Question
I'm planning to render content in a view on iOS using AVMutableComposition. I want to combine the video coming from one of the iPhone cameras with content drawn in a layer - a mutable composition seems to fit the bill here, as it can composite layers into the video content.
It's not critical that the compositing happen while the video is being recorded - I'm also happy to mix the required data into a composition that is then rendered (via AVAssetExportSession) to a file after the initial recording has completed.
What I don't get, though, is how a CALayer is supposed to know what to draw at a given time during the composition, in the context of the AV framework.
My layer content is dependent on a timeline; the timeline describes what needs to be drawn within the layer. So if I embed a layer into the mutable composition and then export that composition via AVAssetExportSession - how will the CALayer instance know what time it's supposed to produce content for?
Answer 1:
I've had a similar thing going on. I'd recommend checking out the WWDC 2010 AVEditDemo sample application source. There is example code there that does exactly what you need - placing a CALayer on top of a video track and running an animation on top of it.
You can also check my efforts on the subject at: Mix video with static image in CALayer using AVVideoCompositionCoreAnimationTool
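To address the timing concern directly: the layer is never asked by the framework to draw "at time t". Instead, you attach Core Animation animations to the overlay layer with beginTime set to AVCoreAnimationBeginTimeAtZero, and AVVideoCompositionCoreAnimationTool maps the composition's timeline onto the layer tree during export. Below is a minimal Swift sketch of that setup under those assumptions (it is not the AVEditDemo code itself); exportWithOverlay, videoURL, and outputURL are hypothetical names, and the fade animation is just a placeholder for your own timeline-driven content.

```swift
import AVFoundation
import UIKit

// Minimal sketch: overlay an animated CALayer on a video track and export.
// `videoURL` and `outputURL` are placeholders you would supply yourself.
func exportWithOverlay(videoURL: URL, outputURL: URL,
                       completion: @escaping (Error?) -> Void) {
    let asset = AVAsset(url: videoURL)
    guard let assetVideoTrack = asset.tracks(withMediaType: .video).first else {
        completion(NSError(domain: "Overlay", code: -1)); return
    }

    // 1. Copy the source video into a mutable composition.
    let composition = AVMutableComposition()
    guard let compVideoTrack = composition.addMutableTrack(
        withMediaType: .video,
        preferredTrackID: kCMPersistentTrackID_Invalid) else {
        completion(NSError(domain: "Overlay", code: -2)); return
    }
    let timeRange = CMTimeRange(start: .zero, duration: asset.duration)
    try? compVideoTrack.insertTimeRange(timeRange, of: assetVideoTrack, at: .zero)

    let videoSize = assetVideoTrack.naturalSize

    // 2. Build the layer tree: parent -> (video layer, overlay layer).
    let parentLayer = CALayer()
    let videoLayer = CALayer()
    let overlayLayer = CALayer()
    parentLayer.frame = CGRect(origin: .zero, size: videoSize)
    videoLayer.frame = parentLayer.frame
    overlayLayer.frame = parentLayer.frame
    overlayLayer.backgroundColor = UIColor.red.withAlphaComponent(0.5).cgColor
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(overlayLayer)

    // 3. Drive the overlay from the composition's timeline, not wall-clock time:
    //    beginTime = AVCoreAnimationBeginTimeAtZero means "composition time zero".
    let fade = CABasicAnimation(keyPath: "opacity")
    fade.fromValue = 1.0
    fade.toValue = 0.0
    fade.beginTime = AVCoreAnimationBeginTimeAtZero
    fade.duration = CMTimeGetSeconds(asset.duration)
    fade.isRemovedOnCompletion = false
    overlayLayer.add(fade, forKey: "fade")

    // 4. Render the video into `videoLayer` and composite the whole
    //    `parentLayer` tree on top of it.
    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = videoSize
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, in: parentLayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = timeRange
    instruction.layerInstructions = [
        AVMutableVideoCompositionLayerInstruction(assetTrack: compVideoTrack)
    ]
    videoComposition.instructions = [instruction]

    // 5. Export; the overlay animation is evaluated against composition time.
    guard let export = AVAssetExportSession(
        asset: composition,
        presetName: AVAssetExportPresetHighestQuality) else {
        completion(NSError(domain: "Overlay", code: -3)); return
    }
    export.videoComposition = videoComposition
    export.outputURL = outputURL
    export.outputFileType = .mp4
    export.exportAsynchronously {
        completion(export.error)
    }
}
```

The two details that answer the original question are the beginTime anchored at AVCoreAnimationBeginTimeAtZero and isRemovedOnCompletion = false: together they mean the animation is evaluated against the composition's own clock during export and stays applied for the whole render instead of snapping back when it finishes.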
Source: https://stackoverflow.com/questions/6206839/how-to-use-avmutablecomposition-and-calayers-on-ios