I'm recording video and audio using AVCaptureVideoDataOutput and AVCaptureAudioDataOutput, and in the captureOutput:didOutputSampleBuffer:fromConnection: …
A Swift version of Bannings's answer:

// Composite the text image over the camera frame.
let combinedFilter = CIFilter(name: "CISourceOverCompositing")!
combinedFilter.setValue(maskImage.oriented(.left), forKey: kCIInputImageKey)
combinedFilter.setValue(inputImage, forKey: kCIInputBackgroundImageKey)
let outputImage = combinedFilter.outputImage!

// Render the composited result back into the original pixel buffer.
let tmpContext = CIContext(options: nil)
tmpContext.render(outputImage, to: pixelBuffer, bounds: outputImage.extent, colorSpace: CGColorSpaceCreateDeviceRGB())
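The snippet above assumes maskImage is a CIImage that already contains the rendered text. As a rough sketch of one way to build such an image (the helper name, font, and layout values here are made up for illustration, not part of the original answer):

import UIKit
import CoreImage

// Render a string into a transparent UIImage, then wrap it as a CIImage.
func makeTextImage(_ text: String, size: CGSize) -> CIImage? {
    let renderer = UIGraphicsImageRenderer(size: size) // non-opaque by default, so the background stays transparent
    let image = renderer.image { _ in
        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.boldSystemFont(ofSize: 40),
            .foregroundColor: UIColor.white
        ]
        (text as NSString).draw(at: CGPoint(x: 20, y: 20), withAttributes: attributes)
    }
    return CIImage(image: image)
}

// Usage (illustrative): let maskImage = makeTextImage("Hello", size: CGSize(width: 1280, height: 720))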
I asked Apple DTS about this same issue, since every approach I had tried was either running really slowly or doing odd things, and they sent me this:
https://developer.apple.com/documentation/avfoundation/avasynchronousciimagefilteringrequest?language=objc
That got me to a working solution really quickly! You can bypass the CVPixelBuffer altogether using CIFilters, which IMHO are much easier to work with. So if you don't actually NEED to use a CVPixelBuffer, this approach will quickly become your new friend.
A combination of CIFilters, compositing the source image with the text image I generated for each frame, did the trick; a rough sketch of the idea is below.
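For illustration, here is a minimal sketch of that approach, assuming you already have an AVAsset named asset, a pre-rendered CIImage overlay named overlayImage, and an AVPlayerItem named playerItem (those names are placeholders, not from the original answer):

import AVFoundation
import CoreImage

// Build a video composition that hands every frame to a CIFilter-based handler.
let composition = AVMutableVideoComposition(asset: asset) { request in
    // request is an AVAsynchronousCIImageFilteringRequest; sourceImage is the current frame.
    let framed = overlayImage.composited(over: request.sourceImage)
    request.finish(with: framed, context: nil)
}

// Attach it wherever the asset is played or exported, e.g.:
playerItem.videoComposition = composition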
I hope this helps someone else!
You can also use CoreGraphics and CoreText to draw directly on top of the existing CVPixelBufferRef if it's RGBA (or on a copy if it's YUV). I have some sample code in this answer: https://stackoverflow.com/a/46524831/48125
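As a rough illustration of that idea (this is a sketch assuming a 32BGRA pixel buffer, not the linked answer's exact code; the function name, font, and positions are made up):

import CoreGraphics
import CoreText
import CoreVideo
import UIKit

// Draw a string directly into a 32BGRA CVPixelBuffer via a CGBitmapContext wrapped around its memory.
func drawText(_ text: String, on pixelBuffer: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    guard let context = CGContext(
        data: CVPixelBufferGetBaseAddress(pixelBuffer),
        width: CVPixelBufferGetWidth(pixelBuffer),
        height: CVPixelBufferGetHeight(pixelBuffer),
        bitsPerComponent: 8,
        bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
    ) else { return }

    // CoreText attributes (UIFont and CGColor bridge to the CTFont/CGColor values CoreText expects).
    let attributes: [NSAttributedString.Key: Any] = [
        NSAttributedString.Key(kCTFontAttributeName as String): UIFont.boldSystemFont(ofSize: 48),
        NSAttributedString.Key(kCTForegroundColorAttributeName as String): UIColor.white.cgColor
    ]
    let line = CTLineCreateWithAttributedString(NSAttributedString(string: text, attributes: attributes))

    // Core Graphics puts the origin at the bottom-left, so this draws near the bottom of the frame.
    context.textPosition = CGPoint(x: 40, y: 40)
    CTLineDraw(line, context)
}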
Do you want something like the below?

Instead of using CIBlendWithMask, you should use CISourceOverCompositing. Try this:

//4. Composite the text image over the camera frame with source-over.
CIFilter *filter = [CIFilter filterWithName:@"CISourceOverCompositing"];
[filter setValue:maskImage forKey:kCIInputImageKey];
[filter setValue:inputImage forKey:kCIInputBackgroundImageKey];
CIImage *outputImage = [filter outputImage];