Question
I plan to refactor the recording system in my iOS app. Context: up to now, I have recorded video and audio separately, starting both recordings at approximately the same time. Once recording is finished, I play the video and audio back separately in the same way, applying AudioUnits to the audio on the fly. Finally, I merge the video with the modified audio. The problem is that the two recordings don't always start at exactly the same time (for whatever reason), producing an unsynchronized result.
Would it be possible to refactor my system like this:
1) Record a normal video with audio into a .mov file --> that way I'd be sure the audio and video are synchronized.
2) While viewing the result with AVPlayer, process the audio part on the fly (I will use AudioKit) --> that's the part I'm not confident about.
Would I be able to send the audio buffer to AudioKit (which would process it) and hand the processed audio back to AVPlayer as if it were the original audio track?
3) Save a final file with the video and the modified audio --> the easy part, with AVFoundation (sketched just below).
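To make step 3 concrete, here is a minimal sketch of the merge I have in mind, using AVMutableComposition and AVAssetExportSession; videoURL, processedAudioURL and outputURL are placeholders for the recorded .mov, the rendered audio file, and the destination:

import AVFoundation

let videoAsset = AVURLAsset(url: videoURL)
let audioAsset = AVURLAsset(url: processedAudioURL)
let composition = AVMutableComposition()
let range = CMTimeRange(start: .zero, duration: videoAsset.duration)

// Copy the video track from the original recording
if let videoTrack = videoAsset.tracks(withMediaType: .video).first,
   let compVideo = composition.addMutableTrack(withMediaType: .video,
                                               preferredTrackID: kCMPersistentTrackID_Invalid) {
    try? compVideo.insertTimeRange(range, of: videoTrack, at: .zero)
}

// Lay the processed audio alongside it, replacing the original audio
if let audioTrack = audioAsset.tracks(withMediaType: .audio).first,
   let compAudio = composition.addMutableTrack(withMediaType: .audio,
                                               preferredTrackID: kCMPersistentTrackID_Invalid) {
    try? compAudio.insertTimeRange(range, of: audioTrack, at: .zero)
}

// Export the combined result
let export = AVAssetExportSession(asset: composition,
                                  presetName: AVAssetExportPresetHighestQuality)
export?.outputURL = outputURL
export?.outputFileType = .mov
export?.exportAsynchronously {
    // Check export?.status and export?.error here
}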
Please ask if you need any more information ;)
Answer 1:
I can think of one fairly simple way to do this.
Basically, you just need to open your video file in an AKPlayer instance and mute the video player's own audio. That puts the video's audio inside AudioKit, and it's pretty simple to lock the video and audio together using a common clock. Pseudo-code of the flow:
import AVFoundation
import AudioKit

// A common clock based on the host time, shared by both players
let audioClock = CMClockGetHostTimeClock()

// Your video player; mute it, since AudioKit will play the audio
let videoPlayer = AVPlayer(url: videoURL)
videoPlayer.masterClock = audioClock
videoPlayer.automaticallyWaitsToMinimizeStalling = false
videoPlayer.isMuted = true

// Your video-audio player
var audioPlayer: AKPlayer?
if let player = try? AKPlayer(url: videoURL) {
    audioPlayer = player
}

// Schedule both players against the same host time
func schedulePlayback(videoTime: TimeInterval, audioTime: TimeInterval, hostTime: UInt64) {
    audioPlay(at: audioTime, hostTime: hostTime)
    videoPlay(at: videoTime, hostTime: hostTime)
}

func audioPlay(at time: TimeInterval = 0, hostTime: UInt64 = 0) {
    audioPlayer?.play(when: time, hostTime: hostTime)
}

func videoPlay(at time: TimeInterval = 0, hostTime: UInt64 = 0) {
    // Convert the host time into a CMTime and offset it by the start position
    let cmHostTime = CMClockMakeHostTimeFromSystemUnits(hostTime)
    let cmVTime = CMTimeMakeWithSeconds(time, preferredTimescale: 1_000_000)
    let futureTime = CMTimeAdd(cmHostTime, cmVTime)
    videoPlayer.setRate(1, time: CMTime.invalid, atHostTime: futureTime)
}
You can connect the player up to any AudioKit processing chain in the normal way.
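For instance, a minimal sketch assuming AudioKit 4, with AKPitchShifter standing in for whatever processing you actually want:

// Route the AKPlayer through an effect and out to the speakers
let pitchShifter = AKPitchShifter(audioPlayer)
pitchShifter.shift = 5
AudioKit.output = pitchShifter
try AudioKit.start()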
When you want to export your audio, run an AKNodeRecorder on the final output processing chain. Record this to file, then merge your audio into your video. I'm not sure if the AudioKit offline processing that is being worked on is ready yet, so you may need to play the audio in real time to capture the processing output.
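A sketch of that real-time capture, again assuming AudioKit 4 and the chain from the previous sketch:

// Tap the end of the chain with a mixer and record it while playing
let mixer = AKMixer(pitchShifter)
AudioKit.output = mixer
let recorder = try AKNodeRecorder(node: mixer)
try recorder.record()
// ... drive playback with schedulePlayback(...), then:
recorder.stop()
if let processedFile = recorder.audioFile {
    // processedFile.url is what you merge back into the video
}

From there, merging the recorded audio back into the video is the same AVMutableComposition step sketched in the question.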
Source: https://stackoverflow.com/questions/47475499/ios-process-audio-from-avplayer-video-track