I'm recording audio on an iPhone, using an AVAudioRecorder
with the following settings:
NSDictionary *recordSettings = [[NSDictionary alloc]
    initWithObjectsAndKeys:[NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey, /* remaining settings elided */ nil];
I found a way that was much faster to implement:
Use AVAudioRecorder with the extension "m4a" for a temporary file. You could also use "caf" if you want, but it's unnecessary.
Modify the code here to use AVAssetExportPresetPassthrough, set exportSession.outputFileType = AVFileTypeQuickTimeMovie, and use the filename "audioJoined.mov". Feed it your newly recorded temporary m4a and an existing m4a file. This gives you an instant join (no recompression) and produces a "mov".
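In case the linked code isn't handy, here's a minimal sketch of the idea. existingURL and recordedURL are placeholders for your two m4a files, and error handling is omitted:

    #import <AVFoundation/AVFoundation.h>

    AVMutableComposition *composition = [AVMutableComposition composition];
    AVMutableCompositionTrack *track =
        [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                                 preferredTrackID:kCMPersistentTrackID_Invalid];

    // Append the existing m4a, then the newly recorded temporary m4a.
    AVURLAsset *first  = [AVURLAsset URLAssetWithURL:existingURL options:nil];
    AVURLAsset *second = [AVURLAsset URLAssetWithURL:recordedURL options:nil];
    [track insertTimeRange:CMTimeRangeMake(kCMTimeZero, first.duration)
                   ofTrack:[[first tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
                    atTime:kCMTimeZero
                     error:nil];
    [track insertTimeRange:CMTimeRangeMake(kCMTimeZero, second.duration)
                   ofTrack:[[second tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
                    atTime:first.duration
                     error:nil];

    // Passthrough export: no re-encoding, but the output has to be a mov.
    AVAssetExportSession *exportSession =
        [AVAssetExportSession exportSessionWithAsset:composition
                                          presetName:AVAssetExportPresetPassthrough];
    exportSession.outputFileType = AVFileTypeQuickTimeMovie;
    exportSession.outputURL = [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:@"audioJoined.mov"]];
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        // Check exportSession.status == AVAssetExportSessionStatusCompleted here.
    }];

The passthrough preset is what avoids the re-encode: it just rewraps the existing AAC packets in a QuickTime container, which is why the output ends up as a mov.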
Note: unfortunately AVAudioPlayer cannot play a "mov", so the next step is to convert it to something playable. However, if you are just going to share the file somewhere you could potentially skip the next step, since the mov is perfectly playable on a Mac in QuickTime. It can also be played in iTunes, synced back to an iPhone, and played in the iPod app.
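One way to do that conversion is a second export using the AVAssetExportPresetAppleM4A preset - note that this one does re-encode. A sketch, with movURL and m4aURL as placeholders:

    AVURLAsset *joined = [AVURLAsset URLAssetWithURL:movURL options:nil];
    AVAssetExportSession *convertSession =
        [AVAssetExportSession exportSessionWithAsset:joined
                                          presetName:AVAssetExportPresetAppleM4A];
    convertSession.outputFileType = AVFileTypeAppleM4A;
    convertSession.outputURL = m4aURL;
    [convertSession exportAsynchronouslyWithCompletionHandler:^{
        // On success, the file at m4aURL is playable with AVAudioPlayer.
    }];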
I'm using this technique in an app that can resume recording after recording has been stopped and the file has been played, or even after the app is restarted. Pretty cool.
Though we ask the AVAudioRecorder to record in MPEG4-AAC format, it always produces a .caf (Core Audio Format) file. This is just a wrapper format, however, and the actual audio data it contains is in AAC format.
In the end, appending files came down to manipulating the .caf files byte-by-byte. The spec for Core Audio Format files is here. Digesting this document and processing the files accordingly was a little off-putting at first, but it turns out the spec is very clear and complete, so it wasn't too onerous.
As the spec explains, .caf files consist of chunks, each beginning with a four-byte name. For AAC files, there's always a desc chunk and a kuki chunk. As we know our two original files are in the same format, we can copy these chunks unchanged to the output file.
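To make that concrete, the file header and chunk header look like this (field names straight from the spec; note you should read them field by field rather than fread-ing the structs, because of padding and byte order):

    #include <stdint.h>

    // CAF file header: 'caff', version 1, flags 0 - eight bytes on disk.
    struct CAFFileHeader {
        uint32_t mFileType;     // 'caff'
        uint16_t mFileVersion;  // 1
        uint16_t mFileFlags;    // 0
    };

    // Every chunk starts with this header, followed by mChunkSize bytes of data.
    struct CAFChunkHeader {
        uint32_t mChunkType;    // four-char code: 'desc', 'kuki', 'pakt', 'data', 'free'
        int64_t  mChunkSize;    // size of the chunk's data section, in bytes
    };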
There's also a pakt chunk and a data chunk. We can't guarantee which order these will be in within the input files. There may or may not be a free chunk - but this just contains padding 0x00's, so we needn't copy this to the output file.
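So the first pass over each input file is just a scan of the chunk list. Something like this (plain C, using the byte-swapping macros from libkern):

    #include <stdio.h>
    #include <stdint.h>
    #include <libkern/OSByteOrder.h>

    // Read one chunk header, swapping from the file's big-endian byte order.
    // Returns 0 at end of file.
    static int readChunkHeader(FILE *f, uint32_t *type, int64_t *size) {
        uint32_t rawType; uint64_t rawSize;
        if (fread(&rawType, sizeof rawType, 1, f) != 1) return 0;
        if (fread(&rawSize, sizeof rawSize, 1, f) != 1) return 0;
        *type = OSSwapBigToHostInt32(rawType);
        *size = (int64_t)OSSwapBigToHostInt64(rawSize);
        return 1;
    }

    // Walk every chunk in the file, noting where each one's data starts.
    static void scanChunks(FILE *f) {
        uint32_t type; int64_t size;
        fseek(f, 8, SEEK_SET);               // skip the 8-byte CAF file header
        while (readChunkHeader(f, &type, &size)) {
            long dataStart = ftell(f);
            // record dataStart and size for 'desc', 'kuki', 'pakt' and 'data';
            // a 'free' chunk is only padding, so just skip over it
            (void)dataStart;
            fseek(f, (long)size, SEEK_CUR);  // jump to the next chunk header
        }
    }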
To combine the pakt chunks, we need to examine the chunk headers and produce a new pakt chunk whose mNumberPackets and mNumberValidFrames fields are the sums of those in the input files. The mPrimingFrames and mRemainderFrames fields are always zero - these are only relevant for streaming media. The bulk of the pakt chunks (i.e. the actual packet table data) can just be concatenated.
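As a sketch, merging the two headers is just this (the struct mirrors the spec's CAFPacketTableHeader; a and b are the parsed headers from the two inputs):

    #include <stdint.h>

    // Packet table header, per the spec (all fields big-endian on disk).
    struct CAFPacketTableHeader {
        int64_t mNumberPackets;
        int64_t mNumberValidFrames;
        int32_t mPrimingFrames;      // zero for our files
        int32_t mRemainderFrames;    // zero for our files
    };

    static struct CAFPacketTableHeader
    mergePaktHeaders(struct CAFPacketTableHeader a, struct CAFPacketTableHeader b) {
        struct CAFPacketTableHeader m;
        m.mNumberPackets     = a.mNumberPackets     + b.mNumberPackets;
        m.mNumberValidFrames = a.mNumberValidFrames + b.mNumberValidFrames;
        m.mPrimingFrames     = 0;
        m.mRemainderFrames   = 0;
        return m;
    }

The new chunk's mChunkSize is then the 24-byte header plus the lengths of both packet tables, which follow it back to back.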
Similarly for the data chunks: the mChunkSize fields need to be summed and then the bulk of the data can be concatenated.
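In other words, the output's data chunk is one new header followed by both payloads back to back; roughly (writeChunkHeader and copyBytes are hypothetical helpers you'd write yourself):

    // Output 'data' chunk: one header with the summed size, then both payloads.
    int64_t mergedDataSize = dataSizeA + dataSizeB;
    writeChunkHeader(out, 'data', mergedDataSize);   // writes the header big-endian
    copyBytes(out, fileA, dataStartA, dataSizeA);    // payload from input A
    copyBytes(out, fileB, dataStartB, dataSizeB);    // payload from input B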
Be careful when reading data from all the binary numeric fields within these files: the files are big-endian but the iPhone is little-endian.
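For instance, using the swap macros from <libkern/OSByteOrder.h> (rawSize, mergedSize, and out stand in for your own variables):

    // Reading: swap the file's big-endian bytes into host order.
    int64_t chunkSize = (int64_t)OSSwapBigToHostInt64(rawSize);

    // Writing: swap back to big-endian before fwrite-ing.
    uint64_t outSize = OSSwapHostToBigInt64((uint64_t)mergedSize);
    fwrite(&outSize, sizeof outSize, 1, out);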
For extra credit, you might also like to consider deleting segments of audio from within a file, or inserting one audio file into the middle of another. This is a little trickier as you have to parse the contents of the pakt chunk. Again it's a case of following the spec: there's a good description of how the packet sizes are stored as variable-length integers, so you'll have to parse these to find how many bytes each packet takes up in the data chunk, and calculate their positions accordingly.
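If you do go down that road, the decoding itself is short. Per the spec, the high bit of each byte is a continuation flag and each byte carries seven value bits, most significant first:

    #include <stdint.h>

    // Decode one variable-length integer from the packet table.
    static int64_t readVLQ(const uint8_t **p) {
        int64_t value = 0;
        uint8_t byte;
        do {
            byte = *(*p)++;                        // consume one byte
            value = (value << 7) | (byte & 0x7F);  // low 7 bits carry the value
        } while (byte & 0x80);                     // high bit set: more bytes follow
        return value;
    }

    // The byte offset of packet n inside the data chunk's payload is then
    // just the sum of the first n packet sizes:
    //     const uint8_t *p = packetTable;
    //     int64_t offset = 0;
    //     for (int64_t i = 0; i < n; i++) offset += readVLQ(&p);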
All in all this is rather more hassle than I was hoping for. Maybe there's an open source library that will do all this for you, but I couldn't find one.
However, handling raw files like this is blindingly fast compared to using AVMutableComposition and AVMutableCompositionTrack as in the original question - inserting an hour-long recording into another of the same length takes about two seconds.
Good luck!