Question
I am developing an iOS app that must handle several stereo audio files (ranging from a few seconds to four minutes in duration) at once, playing back up to three simultaneously, synced through a Multichannel-Mixer-based AUGraph. My audio files are compressed – either MP3, AAC or CAF – but when they are loaded into buffers they are converted into the 32-bit AudioUnitSampleType format (my code is based on Apple's iPhoneMultichannelMixerTest). Needless to say, with such large buffers (a four-minute stereo file decoded to 32-bit samples at 44.1 kHz occupies roughly 85 MB), app memory very quickly becomes an issue.
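For reference, the loading step looks roughly like the sketch below. It is Swift rather than the C/Objective-C of the original iPhoneMultichannelMixerTest sample, it uses interleaved Float32 in place of the fixed-point AudioUnitSampleType, and loadDecodedSamples(from:) is an illustrative helper name, not anything from the sample:

```swift
import Foundation
import AudioToolbox

// Rough sketch of the "decode everything up front" approach: the whole
// compressed file is converted to 32-bit PCM in memory, which is what
// drives the memory footprint.
func loadDecodedSamples(from url: URL, sampleRate: Float64 = 44_100) -> [Float32]? {
    var fileRef: ExtAudioFileRef?
    guard ExtAudioFileOpenURL(url as CFURL, &fileRef) == noErr, let file = fileRef else { return nil }
    defer { ExtAudioFileDispose(file) }

    // Ask Extended Audio File Services to decode MP3/AAC/CAF to interleaved
    // stereo Float32 linear PCM on our behalf.
    var clientFormat = AudioStreamBasicDescription(
        mSampleRate: sampleRate,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked,
        mBytesPerPacket: 8, mFramesPerPacket: 1, mBytesPerFrame: 8,
        mChannelsPerFrame: 2, mBitsPerChannel: 32, mReserved: 0)
    guard ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                                  UInt32(MemoryLayout.size(ofValue: clientFormat)),
                                  &clientFormat) == noErr else { return nil }

    // Read the entire file into one big array; this is where memory balloons.
    var samples: [Float32] = []
    var chunk = [Float32](repeating: 0, count: 4096 * 2)   // 4096 stereo frames
    while true {
        var frames: UInt32 = 4096
        let status = chunk.withUnsafeMutableBytes { raw -> OSStatus in
            var bufferList = AudioBufferList(
                mNumberBuffers: 1,
                mBuffers: AudioBuffer(mNumberChannels: 2,
                                      mDataByteSize: UInt32(raw.count),
                                      mData: raw.baseAddress))
            return ExtAudioFileRead(file, &frames, &bufferList)
        }
        guard status == noErr, frames > 0 else { break }
        samples.append(contentsOf: chunk[0..<Int(frames) * 2])
    }
    return samples
}
```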
Apple's documentation states that iOS devices can decode only one MP3/AAC file in hardware at a time, recommending the CAF (IMA4) format for any other files that need to be decoded in software. Before I develop a system for dynamically loading audio during playback, I want to know: is it possible to load the compressed files into buffers directly (thereby significantly reducing memory requirements) and have my AUGraph convert them on the fly?
Answer 1:
To answer my own question: if all one needs is precise playback synchronisation of audio, without the custom processing that Audio Units make possible, then definitely look at AVFoundation's AVPlayer, AVAsset and AVComposition classes.
Whilst this approach looks more suited to video playback, it works perfectly well with audio-only assets. These classes handle all the loading and buffering of media assets (including CAF, AAC and MP3), use very little memory in doing so, and can be tightly synchronised using the AVURLAssetPreferPreciseDurationAndTimingKey option. They also offer useful callback methods that can be set to fire at specific times during playback (for updating the UI, amongst other things). Finally, since AVFoundation is also available on the Mac (OS X 10.7+), it is a good choice for universal Cocoa development.
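As a concrete illustration of this route, here is a minimal Swift sketch. The class name SyncedPlayer, the file URLs and the 30-second boundary are placeholders of mine; the keys and methods are standard AVFoundation. It builds an AVMutableComposition from several audio assets opened with AVURLAssetPreferPreciseDurationAndTimingKey, plays the composition with a single AVPlayer so the tracks stay locked together, and registers a boundary time observer as an example of the callback mechanism mentioned above:

```swift
import AVFoundation

final class SyncedPlayer {
    private let player: AVPlayer
    private var boundaryObserver: Any?

    init(urls: [URL]) throws {
        // Request sample-accurate timing so the tracks line up when composed.
        let options = [AVURLAssetPreferPreciseDurationAndTimingKey: true]
        let composition = AVMutableComposition()

        for url in urls {
            let asset = AVURLAsset(url: url, options: options)
            guard let sourceTrack = asset.tracks(withMediaType: .audio).first else { continue }
            let track = composition.addMutableTrack(withMediaType: .audio,
                                                    preferredTrackID: kCMPersistentTrackID_Invalid)
            // Every track starts at time zero, so playback of all files is locked together.
            try track?.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                       of: sourceTrack,
                                       at: .zero)
        }

        player = AVPlayer(playerItem: AVPlayerItem(asset: composition))

        // Fire a callback once at 30 s, e.g. to update the UI.
        let thirtySeconds = CMTime(seconds: 30, preferredTimescale: 600)
        boundaryObserver = player.addBoundaryTimeObserver(forTimes: [NSValue(time: thirtySeconds)],
                                                          queue: .main) {
            print("Reached 30 seconds")
        }
    }

    func play() { player.play() }

    deinit {
        if let observer = boundaryObserver { player.removeTimeObserver(observer) }
    }
}
```

Driving one composition from a single AVPlayer is what keeps the files sample-locked; addPeriodicTimeObserver(forInterval:queue:using:) is the repeating counterpart if the UI needs continuous progress updates.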
Source: https://stackoverflow.com/questions/7273839/is-it-possible-to-load-a-compressed-audio-file-directly-into-a-buffer-without-c