I'm generating a video from a Unity app on iOS. I'm using iVidCap, which uses AVFoundation to do this. That side is all working fine. Essentially the video is rendered by
The line

    CMAudioSampleBufferCreateWithPacketDescriptions(kCFAllocatorDefault, bbuf, TRUE, 0, NULL, audio_fmt_desc_, 1, timestamp, NULL, &sbuf);

should be

    CMAudioSampleBufferCreateWithPacketDescriptions(kCFAllocatorDefault, bbuf, TRUE, 0, NULL, audio_fmt_desc_, n, timestamp, NULL, &sbuf);

i.e. the numSamples argument must be the number of frames in the buffer (n), not 1. I made that change.
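For context, here is a minimal sketch of how that call might be wrapped up, assuming the PCM bytes are already sitting in a block buffer. The helper name and the parameter layout around bbuf, audio_fmt_desc_, n and timestamp are illustrative, not the actual plugin code:

    // Hypothetical sketch: wrap n frames of generated PCM in a CMSampleBuffer.
    // bbuf, audio_fmt_desc_, timestamp and sbuf mirror the names used above.
    #include <CoreMedia/CoreMedia.h>

    static CMSampleBufferRef make_audio_sample_buffer(CMBlockBufferRef bbuf,
                                                      CMFormatDescriptionRef audio_fmt_desc_,
                                                      CMItemCount n,
                                                      CMTime timestamp)
    {
        CMSampleBufferRef sbuf = NULL;
        OSStatus err = CMAudioSampleBufferCreateWithPacketDescriptions(
            kCFAllocatorDefault,
            bbuf,              // block buffer holding the PCM bytes
            TRUE,              // data is ready
            NULL, NULL,        // no make-data-ready callback / refcon
            audio_fmt_desc_,   // format description for the PCM audio
            n,                 // number of sample frames -- not 1
            timestamp,         // presentation time of the first frame
            NULL,              // no packet descriptions needed for LPCM
            &sbuf);
        return (err == noErr) ? sbuf : NULL;
    }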
It looks OK, although I would use CMBlockBufferCreateWithMemoryBlock because it copies the samples. Is your code OK with not knowing when audioWriterInput has finished with them?
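One way to sidestep that lifetime question is to let the block buffer allocate and own its backing memory and copy the samples into it, so the caller's array can be reused immediately. A sketch under that assumption; the helper name, mono 16-bit layout and variable names are illustrative:

    // Hypothetical sketch: copy generated samples into memory owned by the
    // block buffer, so their lifetime is no longer tied to the caller's array.
    #include <CoreMedia/CoreMedia.h>
    #include <stdint.h>

    static CMBlockBufferRef copy_samples_into_block_buffer(const int16_t *samples,
                                                           size_t num_frames)
    {
        size_t num_bytes = num_frames * sizeof(int16_t); // assumes mono 16-bit PCM
        CMBlockBufferRef bbuf = NULL;

        // Passing NULL for memoryBlock makes the block buffer allocate (and own)
        // its backing memory via kCFAllocatorDefault.
        OSStatus err = CMBlockBufferCreateWithMemoryBlock(
            kCFAllocatorDefault,
            NULL,                              // let the buffer allocate its own memory
            num_bytes,                         // capacity
            kCFAllocatorDefault,               // allocator for that memory
            NULL,                              // no custom block source
            0,                                 // offset to data
            num_bytes,                         // data length
            kCMBlockBufferAssureMemoryNowFlag, // allocate immediately
            &bbuf);
        if (err != kCMBlockBufferNoErr) return NULL;

        // Copy the PCM bytes into the buffer-owned memory.
        err = CMBlockBufferReplaceDataBytes(samples, bbuf, 0, num_bytes);
        if (err != kCMBlockBufferNoErr) { CFRelease(bbuf); return NULL; }
        return bbuf;
    }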
Shouldn't kAudioFormatFlagIsAlignedHigh be kAudioFormatFlagIsPacked?
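For reference, a packed signed-integer LPCM setup would typically look like the sketch below; the mono 44.1 kHz values and the helper name are assumptions, the flag choice is the point:

    // Hypothetical sketch of an AudioStreamBasicDescription for interleaved,
    // packed 16-bit signed PCM. kAudioFormatFlagIsPacked (rather than
    // kAudioFormatFlagIsAlignedHigh) is the usual choice when every bit of the
    // sample word is used.
    #include <CoreMedia/CoreMedia.h>

    static CMAudioFormatDescriptionRef make_pcm_format_description(void)
    {
        AudioStreamBasicDescription asbd = {0};
        asbd.mSampleRate       = 44100.0;   // assumed sample rate
        asbd.mFormatID         = kAudioFormatLinearPCM;
        asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        asbd.mChannelsPerFrame = 1;         // assumed mono
        asbd.mBitsPerChannel   = 16;
        asbd.mBytesPerFrame    = asbd.mChannelsPerFrame * (asbd.mBitsPerChannel / 8);
        asbd.mFramesPerPacket  = 1;         // always 1 for LPCM
        asbd.mBytesPerPacket   = asbd.mBytesPerFrame;

        CMAudioFormatDescriptionRef fmt = NULL;
        OSStatus err = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &asbd,
                                                      0, NULL,   // no channel layout
                                                      0, NULL,   // no magic cookie
                                                      NULL,      // no extensions
                                                      &fmt);
        return (err == noErr) ? fmt : NULL;
    }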