Question
I'm attempting to apply a CIFilter to an AVAsset and then save it with the filter applied. The way I'm doing this is by using an AVAssetExportSession with its videoComposition set to an AVMutableVideoComposition object that uses a custom AVVideoCompositing class.
I am also setting the instructions of my AVMutableVideoComposition object to a custom composition instruction class (subclassing AVMutableVideoCompositionInstruction). This class is passed a track ID, along with a few other unimportant variables.
Unfortunately, I've run into a problem: the startVideoCompositionRequest: function in my custom video compositor class (conforming to AVVideoCompositing) is not being called correctly.
When I set the passthroughTrackID variable of my custom instruction class to the track ID, the startVideoCompositionRequest(request) function in my AVVideoCompositing class is not called.
Yet when I do not set the passthroughTrackID variable of my custom instruction class, startVideoCompositionRequest(request) is called, but not correctly: printing request.sourceTrackIDs results in an empty array, and request.sourceFrameByTrackID(trackID) returns nil.
Something interesting I found is that cancelAllPendingVideoCompositionRequests: is always called twice when attempting to export the video with filters. It is either called once before startVideoCompositionRequest: and once after, or just twice in a row when startVideoCompositionRequest: is not called.
I've created three classes for exporting the video with filters. Here's the utility class, which basically just includes an export function and calls all of the required code:
class VideoFilterExport{

    let asset: AVAsset
    init(asset: AVAsset){
        self.asset = asset
    }

    func export(toURL url: NSURL, callback: (url: NSURL?) -> Void){
        guard let track: AVAssetTrack = self.asset.tracksWithMediaType(AVMediaTypeVideo).first else{callback(url: nil); return}

        let composition = AVMutableComposition()
        let compositionTrack = composition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
        do{
            try compositionTrack.insertTimeRange(track.timeRange, ofTrack: track, atTime: kCMTimeZero)
        }
        catch _{callback(url: nil); return}

        let videoComposition = AVMutableVideoComposition(propertiesOfAsset: composition)
        videoComposition.customVideoCompositorClass = VideoFilterCompositor.self
        videoComposition.frameDuration = CMTimeMake(1, 30)
        videoComposition.renderSize = compositionTrack.naturalSize

        let instruction = VideoFilterCompositionInstruction(trackID: compositionTrack.trackID)
        instruction.timeRange = CMTimeRangeMake(kCMTimeZero, self.asset.duration)
        videoComposition.instructions = [instruction]

        let session: AVAssetExportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetMediumQuality)!
        session.videoComposition = videoComposition
        session.outputURL = url
        session.outputFileType = AVFileTypeMPEG4

        session.exportAsynchronouslyWithCompletionHandler(){
            callback(url: url)
        }
    }
}
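One detail worth noting at the call site: AVAssetExportSession fails if a file already exists at its outputURL, so the destination should be cleared before exporting. A minimal, framework-free sketch of a hypothetical call site (the temporary-directory path and file name here are assumptions, not from the question; the actual export call is shown only as a comment since it requires AVFoundation):

```swift
import Foundation

// Hypothetical destination in the temporary directory.
let outputURL = URL(fileURLWithPath: NSTemporaryDirectory())
    .appendingPathComponent("filtered.mp4")

// AVAssetExportSession refuses to overwrite, so remove any stale file first.
try? FileManager.default.removeItem(at: outputURL)

// The export itself would then look something like:
// VideoFilterExport(asset: asset).export(toURL: outputURL){ url in /* ... */ }

print(outputURL.lastPathComponent)
```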
Here are the other two classes; I'll put them both into one code block to make this post shorter:
// Video Filter Composition Instruction Class - from what I gather,
// AVVideoCompositionInstruction is used only to pass values to
// the AVVideoCompositing class
class VideoFilterCompositionInstruction : AVMutableVideoCompositionInstruction{

    let trackID: CMPersistentTrackID
    let filters: ImageFilterGroup
    let context: CIContext

    // When I leave this line as-is, startVideoCompositionRequest: isn't called.
    // When commented out, startVideoCompositionRequest(request) is called, but there
    // are no valid CVPixelBuffers provided by request.sourceFrameByTrackID(below value)
    override var passthroughTrackID: CMPersistentTrackID{get{return self.trackID}}

    override var requiredSourceTrackIDs: [NSValue]{get{return []}}
    override var containsTweening: Bool{get{return false}}

    init(trackID: CMPersistentTrackID, filters: ImageFilterGroup, context: CIContext){
        self.trackID = trackID
        self.filters = filters
        self.context = context

        super.init()

        //self.timeRange = timeRange
        self.enablePostProcessing = true
    }

    required init?(coder aDecoder: NSCoder){
        fatalError("init(coder:) has not been implemented")
    }
}
// My custom AVVideoCompositing class. This is where the problem lies -
// although I don't know if this is the root of the problem
class VideoFilterCompositor : NSObject, AVVideoCompositing{

    var requiredPixelBufferAttributesForRenderContext: [String : AnyObject] = [
        kCVPixelBufferPixelFormatTypeKey as String : NSNumber(unsignedInt: kCVPixelFormatType_32BGRA), // The video is in 32 BGRA
        kCVPixelBufferOpenGLESCompatibilityKey as String : NSNumber(bool: true),
        kCVPixelBufferOpenGLCompatibilityKey as String : NSNumber(bool: true)
    ]
    var sourcePixelBufferAttributes: [String : AnyObject]? = [
        kCVPixelBufferPixelFormatTypeKey as String : NSNumber(unsignedInt: kCVPixelFormatType_32BGRA),
        kCVPixelBufferOpenGLESCompatibilityKey as String : NSNumber(bool: true),
        kCVPixelBufferOpenGLCompatibilityKey as String : NSNumber(bool: true)
    ]

    let renderQueue = dispatch_queue_create("co.getblix.videofiltercompositor.renderingqueue", DISPATCH_QUEUE_SERIAL)

    override init(){
        super.init()
    }

    func startVideoCompositionRequest(request: AVAsynchronousVideoCompositionRequest){
        // This code block is never executed when the
        // passthroughTrackID variable is in the above class
        autoreleasepool(){
            dispatch_async(self.renderQueue){
                guard let instruction = request.videoCompositionInstruction as? VideoFilterCompositionInstruction else{
                    request.finishWithError(NSError(domain: "getblix.co", code: 760, userInfo: nil))
                    return
                }
                guard let pixels = request.sourceFrameByTrackID(instruction.passthroughTrackID) else{
                    // This code block is executed when I comment out the
                    // passthroughTrackID variable in the above class
                    request.finishWithError(NSError(domain: "getblix.co", code: 761, userInfo: nil))
                    return
                }
                // I have not been able to get the code to reach this point
                // This function is either not called, or the guard
                // statement above executes
                let image = CIImage(CVPixelBuffer: pixels)
                let filtered: CIImage = //apply the filter here

                let width = CVPixelBufferGetWidth(pixels)
                let height = CVPixelBufferGetHeight(pixels)
                let format = CVPixelBufferGetPixelFormatType(pixels)
                var newBuffer: CVPixelBuffer?
                CVPixelBufferCreate(kCFAllocatorDefault, width, height, format, nil, &newBuffer)

                if let buffer = newBuffer{
                    instruction.context.render(filtered, toCVPixelBuffer: buffer)
                    request.finishWithComposedVideoFrame(buffer)
                }
                else{
                    request.finishWithComposedVideoFrame(pixels)
                }
            }
        }
    }

    func renderContextChanged(newRenderContext: AVVideoCompositionRenderContext){
        // I don't have any code in this block
    }

    // This is interesting - this is called twice,
    // Once before startVideoCompositionRequest is called,
    // And once after. In the case when startVideoCompositionRequest
    // Is not called, this is simply called twice in a row
    func cancelAllPendingVideoCompositionRequests(){
        dispatch_barrier_async(self.renderQueue){
            print("Cancelled")
        }
    }
}
I've been looking at Apple's AVCustomEdit sample project a lot for guidance with this, but I can't find anything in it that explains why this is happening.
How can I get request.sourceFrameByTrackID: to work correctly and provide a valid CVPixelBuffer for each frame?
Answer 1:
It turns out that the requiredSourceTrackIDs variable in the custom AVVideoCompositionInstruction class (VideoFilterCompositionInstruction in the question) has to be set to an array containing the track IDs:
override var requiredSourceTrackIDs: [NSValue]{
    get{
        return [
            NSNumber(value: Int(self.trackID))
        ]
    }
}
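Since CMPersistentTrackID is declared as Int32 in CoreMedia, the fix amounts to boxing the ID in an NSNumber (which is an NSValue subclass, so the array satisfies [NSValue]). A framework-free sketch of just that conversion, using a stand-in typealias so it compiles without CoreMedia:

```swift
import Foundation

// Stand-in for CMPersistentTrackID, which CoreMedia declares as Int32.
typealias TrackID = Int32

// Box each track ID the way the requiredSourceTrackIDs override does;
// NSNumber is an NSValue subclass, so the result can be typed [NSValue].
func sourceTrackIDValues(_ trackIDs: [TrackID]) -> [NSValue] {
    return trackIDs.map { NSNumber(value: Int($0)) }
}

// AVFoundation reads the IDs back out via NSNumber's int32Value.
let boxed = sourceTrackIDValues([42])
print((boxed[0] as! NSNumber).int32Value)
```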
So the final custom composition instruction class is:
class VideoFilterCompositionInstruction : AVMutableVideoCompositionInstruction{

    let trackID: CMPersistentTrackID
    let filters: [CIFilter]
    let context: CIContext

    override var passthroughTrackID: CMPersistentTrackID{get{return self.trackID}}
    override var requiredSourceTrackIDs: [NSValue]{get{return [NSNumber(value: Int(self.trackID))]}}
    override var containsTweening: Bool{get{return false}}

    init(trackID: CMPersistentTrackID, filters: [CIFilter], context: CIContext){
        self.trackID = trackID
        self.filters = filters
        self.context = context

        super.init()

        self.enablePostProcessing = true
    }

    required init?(coder aDecoder: NSCoder){
        fatalError("init(coder:) has not been implemented")
    }
}
All of the code for this utility is also on GitHub.
Answer 2:
As you've noted, having passthroughTrackID return the track you want to filter isn't the right approach; you need to return the track to be filtered from requiredSourceTrackIDs instead. (And it looks like once you do that, it doesn't matter whether you also return it from passthroughTrackID.) To answer the remaining question of why it works this way...
The docs for passthroughTrackID and requiredSourceTrackIDs certainly aren't Apple's clearest writing ever. (File a bug about it and they might improve them.) But if you look closely at the description of the former, there's a hint (emphasis added):
"If for the duration of the instruction, the video composition result is one of the source frames, this property returns the corresponding track ID. The compositor won't be run for the duration of the instruction and the proper source frame is used instead."
So, you use passthroughTrackID only when you're making an instruction class that passes a single track through without processing. If you plan to perform any image processing, even on just a single track with no compositing, specify that track in requiredSourceTrackIDs instead.
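The behavior described above can be condensed into a small decision rule. The model below is an informal, framework-free sketch of my reading of the docs, not AVFoundation's actual implementation (the type and function names are made up; the invalid-ID constant mirrors kCMPersistentTrackID_Invalid, which is 0 in CoreMedia):

```swift
import Foundation

let invalidTrackID: Int32 = 0  // mirrors kCMPersistentTrackID_Invalid

struct InstructionModel {
    var passthroughTrackID: Int32
    var requiredSourceTrackIDs: [Int32]
}

// Returns nil for passthrough (compositor skipped entirely); otherwise the
// track IDs whose frames the compositor's request will be able to fetch.
func compositorSources(for i: InstructionModel) -> [Int32]? {
    if !i.requiredSourceTrackIDs.isEmpty { return i.requiredSourceTrackIDs }
    if i.passthroughTrackID != invalidTrackID { return nil }  // pure passthrough
    return []  // compositor runs, but with no source frames: the question's symptom
}

// Passthrough only: startVideoCompositionRequest is never called.
print(compositorSources(for: InstructionModel(passthroughTrackID: 1, requiredSourceTrackIDs: [])) == nil)
// Required sources set: the compositor runs and can fetch track 1's frames.
print(compositorSources(for: InstructionModel(passthroughTrackID: invalidTrackID, requiredSourceTrackIDs: [1])) ?? [])
```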
Source: https://stackoverflow.com/questions/39137099/custom-avvideocompositing-class-not-working-as-expected