Making CIContext.render(CIImage, CVPixelBuffer) work with AVAssetWriter

南方客 2021-01-03 06:45

I want to use Core Image to process a bunch of CGImage objects and turn them into a QuickTime movie on macOS. The following code demonstrates the problem.

2 Answers
  • 2021-01-03 07:32

    After speaking with Apple Developer Technical Support it appears that:

    Core Image defers the rendering until the client requests access to the frame buffer, i.e. via CVPixelBufferLockBaseAddress.

    So the solution is simply to call CVPixelBufferLockBaseAddress after CIContext.render, as shown below:

    for frameNumber in 0 ..< frameCount {
        // Draw pixel buffers from the adaptor's pool rather than allocating new ones.
        var pixelBuffer: CVPixelBuffer?
        guard let pixelBufferPool: CVPixelBufferPool = pixelBufferAdaptor.pixelBufferPool else { preconditionFailure() }
        precondition(CVPixelBufferPoolCreatePixelBuffer(nil, pixelBufferPool, &pixelBuffer) == kCVReturnSuccess)
    
        let ciImage = CIImage(cgImage: frameImage)
        context.render(ciImage, to: pixelBuffer!)
    
        // Locking the base address forces Core Image to actually render into the buffer.
        precondition(CVPixelBufferLockBaseAddress(pixelBuffer!, []) == kCVReturnSuccess)
        defer { precondition(CVPixelBufferUnlockBaseAddress(pixelBuffer!, []) == kCVReturnSuccess) }
    
        // Sanity check: the buffer is no longer all zeros after rendering.
        let bytes = UnsafeBufferPointer(
            start: CVPixelBufferGetBaseAddress(pixelBuffer!)!.assumingMemoryBound(to: UInt8.self),
            count: CVPixelBufferGetDataSize(pixelBuffer!))
        precondition(bytes.contains(where: { $0 != 0 }))
    
        // Busy-wait until the input can accept more data.
        while !input.isReadyForMoreMediaData { Thread.sleep(forTimeInterval: 10 / 1000) }
        // Note: frameRate here must be the duration of one frame in seconds.
        precondition(pixelBufferAdaptor.append(pixelBuffer!, withPresentationTime: CMTime(seconds: Double(frameNumber) * frameRate, preferredTimescale: 600)))
    }
    
  • 2021-01-03 07:36

    For your use case it would be better to use the pull-style API of AVAssetWriterInput, because you don't need to process any media in real time (as you would when capturing from a camera).

    So rather than sleeping the thread while the input isn't ready, let the writer pull the next frame when it wants one. Remember to also set expectsMediaDataInRealTime to false in this case.

    I think the main problem with your current approach is that you pause the very thread that the video processing happens on while the writer is not yet ready.
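
    A minimal sketch of the pull-style approach. The names `input`, `pixelBufferAdaptor`, and `frameCount` are assumed to be set up as in the other answer; `makePixelBuffer(for:)` is a hypothetical helper that renders one frame the way shown there, and the 30 fps timescale is an assumption:

    ```swift
    import AVFoundation

    input.expectsMediaDataInRealTime = false

    var frameNumber = 0
    let queue = DispatchQueue(label: "mediaInputQueue")

    input.requestMediaDataWhenReady(on: queue) {
        // The writer invokes this block whenever it can accept more data;
        // append frames until it is no longer ready or we run out of frames.
        while input.isReadyForMoreMediaData && frameNumber < frameCount {
            let pixelBuffer = makePixelBuffer(for: frameNumber) // hypothetical helper
            let time = CMTime(value: CMTimeValue(frameNumber), timescale: 30) // assuming 30 fps
            pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: time)
            frameNumber += 1
        }
        if frameNumber >= frameCount {
            input.markAsFinished()
            // assetWriter.finishWriting { ... } would follow here
        }
    }
    ```

    This way the writer drives the pace, and no thread ever sleeps waiting for isReadyForMoreMediaData.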

    (By the way: you can create a solid-color CIImage directly with CIImage(color:); no need to create a CGImage first.)
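
    For example (the frame size here is an assumption):

    ```swift
    import CoreImage

    // CIImage(color:) produces an infinite-extent image,
    // so crop it to the desired frame size before rendering.
    let red = CIImage(color: CIColor(red: 1, green: 0, blue: 0))
        .cropped(to: CGRect(x: 0, y: 0, width: 1920, height: 1080))
    ```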
