I'm working with AVFoundation for capturing and recording audio. There are some issues I don't quite understand.
Basically, I want to capture audio from an AVCaptureSession.
Use CMSampleBufferGetPresentationTimeStamp; that is the time at which the buffer was captured and at which it should be "presented" during playback to stay in sync. To quote session 520 at WWDC 2012: "Presentation time is the time at which the first sample in the buffer was picked up by the microphone".
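As a minimal sketch (assuming a delegate is already wired up to the session's sample buffer data output; the session/output setup is not shown):

```objc
// Sample buffer delegate callback (sketch). The capture session and
// data output are assumed to be configured elsewhere.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Time at which the first sample in this buffer was picked up.
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    NSLog(@"buffer PTS: %f", CMTimeGetSeconds(pts));
}
```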
If you start the AVWriter with
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
and then append samples with
if (videoWriterInput.readyForMoreMediaData) {
    [videoWriterInput appendSampleBuffer:sampleBuffer];
}
the frames in the finished video will be consistent with CMSampleBufferGetPresentationTimeStamp (I have checked). If you want to modify the time when adding samples, you have to use AVAssetWriterInputPixelBufferAdaptor.
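For example (a sketch, assuming you have a CVPixelBufferRef in hand; `pixelBuffer` and `newTime`, the adjusted timestamp, are hypothetical names):

```objc
// Create an adaptor for the existing writer input.
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
                                   sourcePixelBufferAttributes:nil];

// Append the pixel buffer with whatever presentation time you choose,
// instead of the timestamp carried by the original sample buffer.
if (videoWriterInput.readyForMoreMediaData) {
    [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:newTime];
}
```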