I wrote a loop to encode PCM audio data generated by my app to AAC using Extended Audio File Services. The encoding takes place in a background thread synchronously, and not in real time.
I had a very similar problem: I was using Extended Audio File Services to stream PCM sound into an .m4a file on an iPad 2. Everything appeared to work, except that every call to ExtAudioFileWrite returned the error code -66567 (kExtAudioFileError_MaxPacketSizeUnknown). The fix I eventually found was to set the codec manufacturer to software instead of hardware. So place
// Select the software AAC encoder; the hardware codec rejects this configuration.
UInt32 codecManf = kAppleSoftwareAudioCodecManufacturer;
ExtAudioFileSetProperty(FileToWrite, kExtAudioFileProperty_CodecManufacturer, sizeof(codecManf), &codecManf);
just before you set the client data format.
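For context, the ordering that ended up working for me looks roughly like this. It's a sketch with my formats hard-coded (44.1 kHz stereo 16-bit PCM in, AAC out) and an illustrative wrapper function; adapt the formats to your own stream:

#include <AudioToolbox/AudioToolbox.h>

static ExtAudioFileRef CreateM4AWriter(CFURLRef url) {
    // File (on-disk) format: AAC in an .m4a container. For a compressed
    // format, most ASBD fields are left at 0 for the codec to fill in.
    AudioStreamBasicDescription fileFormat = {0};
    fileFormat.mSampleRate       = 44100.0;
    fileFormat.mFormatID         = kAudioFormatMPEG4AAC;
    fileFormat.mChannelsPerFrame = 2;

    ExtAudioFileRef fileToWrite = NULL;
    OSStatus err = ExtAudioFileCreateWithURL(url, kAudioFileM4AType, &fileFormat,
                                             NULL, kAudioFileFlags_EraseFile, &fileToWrite);
    if (err != noErr) return NULL;

    // The crucial step: select the software encoder *before* setting the
    // client data format, otherwise ExtAudioFileWrite can fail with -66567.
    UInt32 codecManf = kAppleSoftwareAudioCodecManufacturer;
    err = ExtAudioFileSetProperty(fileToWrite, kExtAudioFileProperty_CodecManufacturer,
                                  sizeof(codecManf), &codecManf);
    if (err != noErr) { ExtAudioFileDispose(fileToWrite); return NULL; }

    // Client (in-memory) format: the interleaved 16-bit LPCM the app produces.
    AudioStreamBasicDescription clientFormat = {0};
    clientFormat.mSampleRate       = 44100.0;
    clientFormat.mFormatID         = kAudioFormatLinearPCM;
    clientFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    clientFormat.mBitsPerChannel   = 16;
    clientFormat.mChannelsPerFrame = 2;
    clientFormat.mBytesPerFrame    = 4;   // 2 channels * 2 bytes per sample
    clientFormat.mFramesPerPacket  = 1;
    clientFormat.mBytesPerPacket   = 4;
    err = ExtAudioFileSetProperty(fileToWrite, kExtAudioFileProperty_ClientDataFormat,
                                  sizeof(clientFormat), &clientFormat);
    if (err != noErr) { ExtAudioFileDispose(fileToWrite); return NULL; }

    // Then call ExtAudioFileWrite(...) in the encode loop and
    // ExtAudioFileDispose(...) when finished.
    return fileToWrite;
}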
This leads me to believe that Apple's hardware codecs support only a narrow set of formats, while the software codecs can more reliably do what you want. In my case, the software-codec conversion to .m4a takes about 50% longer than writing the same audio out as LPCM.
Does anyone know whether Apple documents anywhere what their audio codec hardware is capable of? It seems that software engineers are stuck playing an hours-long guessing game, setting the ~20 parameters in the AudioStreamBasicDescription and AudioChannelLayout for the client and for the file to every possible permutation until something works...
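I haven't found official documentation, but you can at least interrogate the system at runtime through the AudioFormat API: kAudioFormatProperty_Encoders lists the installed encoders for a given format (the mManufacturer field distinguishes hardware from software), and kAudioFormatProperty_AvailableEncodeSampleRates reports the sample rates the encoder accepts. A rough sketch:

#include <AudioToolbox/AudioToolbox.h>
#include <stdio.h>
#include <stdlib.h>

// Print which AAC encoders are installed on the device and what
// sample rates the format can be encoded at.
static void DumpAACEncoderInfo(void) {
    UInt32 formatID = kAudioFormatMPEG4AAC;
    UInt32 size = 0;

    // Which encoders (hardware and/or software) are installed?
    if (AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders,
                                   sizeof(formatID), &formatID, &size) == noErr) {
        UInt32 count = size / sizeof(AudioClassDescription);
        AudioClassDescription *encoders = malloc(size);
        AudioFormatGetProperty(kAudioFormatProperty_Encoders,
                               sizeof(formatID), &formatID, &size, encoders);
        for (UInt32 i = 0; i < count; i++) {
            printf("encoder %u: %s\n", (unsigned)i,
                   encoders[i].mManufacturer == kAppleHardwareAudioCodecManufacturer
                       ? "hardware" : "software");
        }
        free(encoders);
    }

    // Which sample rates can be encoded?
    if (AudioFormatGetPropertyInfo(kAudioFormatProperty_AvailableEncodeSampleRates,
                                   sizeof(formatID), &formatID, &size) == noErr) {
        UInt32 count = size / sizeof(AudioValueRange);
        AudioValueRange *rates = malloc(size);
        AudioFormatGetProperty(kAudioFormatProperty_AvailableEncodeSampleRates,
                               sizeof(formatID), &formatID, &size, rates);
        for (UInt32 i = 0; i < count; i++)
            printf("sample rates: %.0f - %.0f Hz\n", rates[i].mMinimum, rates[i].mMaximum);
        free(rates);
    }
}

That still leaves you guessing at the full ASBD permutation, but it narrows the search space considerably.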