I've been doing some reading up on Core Audio for iOS 4 with the aim of building a little test app.
I'm pretty confused at this point in my research by all the APIs.
There's ExtendedAudioFile and AudioFile. These seem to be the ones for extracting audio. Which one should I use?
Neither of those will work if you are accessing audio files stored in the iPod library; you will have to use AVAssetReader. (Note: the AVAssetReader documentation states that AVAssetReader is not intended for use with real-time sources and that its performance is not guaranteed for real-time operations.)
All I can say is that it worked fine for me, and I've created several real-time applications using just AVAssetReader. Here is a sample.
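In case the linked sample isn't to hand, a minimal Swift sketch of the idea looks roughly like this (modern API names; the function name, the 16-bit output settings, and the `assetURL` parameter are illustrative choices of mine, not the original sample):

```swift
import AVFoundation

// Sketch: pull decoded linear PCM out of an asset (e.g. an iPod-library item
// whose URL came from MPMediaItem's assetURL). Illustrative only.
func readPCM(from assetURL: URL) throws {
    let asset = AVURLAsset(url: assetURL)
    guard let track = asset.tracks(withMediaType: .audio).first else { return }

    // Ask the reader to decode to 16-bit interleaved linear PCM.
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsNonInterleaved: false
    ]
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
    reader.add(output)
    reader.startReading()

    // Pull decoded sample buffers until the track is exhausted.
    while let sampleBuffer = output.copyNextSampleBuffer() {
        if let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) {
            let byteCount = CMBlockBufferGetDataLength(blockBuffer)
            // Copy `byteCount` bytes of PCM out of `blockBuffer` here and hand
            // them to your audio unit / ring buffer.
            _ = byteCount
        }
    }
}
```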
Please see my answer here for more general tips on iOS audio programming as well.
Finally, the book Learning Core Audio is obviously released by now. I strongly recommend you patiently go through the chapters and play with the sample code. It's best to take your time with the examples and let the concepts sink in before you jump to your more complex scenario. Copying and pasting sample code from the web and/or following high-level advice from people online may work at the beginning, but you'll run into really hairy problems later on that no one else will help you fix. Trust me, I learned the hard way!
The documentation on Core Audio has improved a lot over the past few years, but it's still incomplete, sometimes confusing, and sometimes just wrong. And I find the structure of the framework itself quite confusing (AudioToolbox, AudioUnit, CoreAudio, ... what is what?).
But my suggestions for tackling your task are these (warning: I haven't done the following on iOS, only Mac OS, but I think it's roughly the same):
Use ExtAudioFile (Extended Audio File Services, declared in the AudioToolbox framework) to read the mp3s. It does just what the name suggests: it extends the capabilities of AudioFile. That is, you can assign an audio stream format (AudioStreamBasicDescription) to an ExtAudioFile, and when you read from it, it converts into that format for you (for further processing with audio units, use the format ID kAudioFormatLinearPCM and the format flags kAudioFormatFlagsAudioUnitCanonical).
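For example, opening the file and setting the client data format might look like this (a Swift sketch under my own assumptions about sample rate and channel count; I use interleaved 32-bit float here, since kAudioFormatFlagsAudioUnitCanonical refers to the older 8.24 fixed-point canonical format):

```swift
import AudioToolbox

// Sketch: open a file and tell ExtAudioFile what format to convert into when
// reading. `fileURL` is a placeholder for the URL of your mp3.
func openConverted(_ fileURL: URL) -> ExtAudioFileRef? {
    var extFile: ExtAudioFileRef?
    guard ExtAudioFileOpenURL(fileURL as CFURL, &extFile) == noErr,
          let file = extFile else { return nil }

    // The client data format is what ExtAudioFileRead delivers: here,
    // interleaved stereo 32-bit float at 44.1 kHz (assumed values).
    var clientFormat = AudioStreamBasicDescription(
        mSampleRate: 44100,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagsNativeFloatPacked,
        mBytesPerPacket: 8,
        mFramesPerPacket: 1,
        mBytesPerFrame: 8,
        mChannelsPerFrame: 2,
        mBitsPerChannel: 32,
        mReserved: 0)
    ExtAudioFileSetProperty(file,
                            kExtAudioFileProperty_ClientDataFormat,
                            UInt32(MemoryLayout.size(ofValue: clientFormat)),
                            &clientFormat)
    return file
}
```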
Then you use ExtAudioFileRead to read the converted audio into an AudioBufferList struct, which is a collection of AudioBuffer structs (both declared in the CoreAudio framework), one for each channel (so usually two). Check out the 'Core Audio Data Types Reference' in the Audio section of the docs for things like AudioStreamBasicDescription, AudioBufferList and AudioBuffer.
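Continuing the sketch above, reading a chunk of converted frames into an AudioBufferList could look roughly like this (interleaved stereo float and the 4096-frame chunk size are assumptions of mine):

```swift
import AudioToolbox

// Sketch: read up to `maxFrames` converted frames from an ExtAudioFile opened
// as in the previous snippet. Returns the interleaved samples actually read.
func readChunk(from file: ExtAudioFileRef, maxFrames: UInt32 = 4096) -> [Float] {
    let channels: UInt32 = 2
    var samples = [Float](repeating: 0, count: Int(maxFrames * channels))
    var frameCount = maxFrames

    samples.withUnsafeMutableBufferPointer { ptr in
        // One interleaved buffer; a non-interleaved format would need one
        // AudioBuffer per channel instead.
        var bufferList = AudioBufferList(
            mNumberBuffers: 1,
            mBuffers: AudioBuffer(
                mNumberChannels: channels,
                mDataByteSize: UInt32(ptr.count * MemoryLayout<Float>.size),
                mData: UnsafeMutableRawPointer(ptr.baseAddress)))
        ExtAudioFileRead(file, &frameCount, &bufferList)
    }
    // frameCount now holds the number of frames actually read (0 at end of file).
    return Array(samples.prefix(Int(frameCount * channels)))
}
```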
Now, use audio units to play back and mix the files; it's not that hard. Audio units seem like this 'big thing', but they really aren't. Look into AudioUnitProperties.h and AUComponent.h (in the AudioUnit framework) for descriptions of the available audio units, and check out the 'Audio Unit Hosting Guide for iOS' in the docs. The only problem here is that there is no audio file player unit for iOS... if I remember correctly, you have to feed your audio units with samples manually via a render callback (see the sketch below).
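A hedged sketch of such a render callback (all names illustrative; the body just writes silence where your decoded samples would go):

```swift
import AudioToolbox

// Sketch of a render callback. A capture-free closure converts to the C
// function pointer type AURenderCallback.
let renderCallback: AURenderCallback = { _, _, _, _, inNumberFrames, ioData in
    guard let abl = ioData else { return noErr }
    for buffer in UnsafeMutableAudioBufferListPointer(abl) {
        // A real app would copy `inNumberFrames` frames out of its ring buffer
        // (or the ExtAudioFileRead results) here; this sketch writes silence.
        if let data = buffer.mData {
            data.initializeMemory(as: Float.self, repeating: 0,
                                  count: Int(buffer.mDataByteSize) / MemoryLayout<Float>.size)
        }
    }
    return noErr
}

// Attach it to an input bus of a unit with kAudioUnitProperty_SetRenderCallback
// (or AUGraphSetNodeInputCallback when using an AUGraph, as below).
var callbackStruct = AURenderCallbackStruct(inputProc: renderCallback,
                                            inputProcRefCon: nil)
```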
Audio units live in an AUGraph (declared in the AudioToolbox framework) and are interconnected like audio hardware through a patchbay. The graph also handles the audio output for you. You can check out the 'PlaySoftMIDI' and 'MixerHost' example code regarding this (actually, I just had another look into MixerHost, and I think it's just what you want to do!).
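A minimal MixerHost-style graph might be wired up roughly like this (a sketch, not the actual sample code; note that Apple has since deprecated AUGraph in favour of AVAudioEngine, but the structure is the same):

```swift
import AudioToolbox

// Sketch: an AUGraph with a multichannel mixer patched into the Remote I/O
// output unit, which is what drives the hardware.
func makeGraph() -> AUGraph? {
    var maybeGraph: AUGraph?
    guard NewAUGraph(&maybeGraph) == noErr, let graph = maybeGraph else { return nil }

    var mixerDesc = AudioComponentDescription(
        componentType: kAudioUnitType_Mixer,
        componentSubType: kAudioUnitSubType_MultiChannelMixer,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0, componentFlagsMask: 0)
    var ioDesc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_RemoteIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0, componentFlagsMask: 0)

    var mixerNode = AUNode()
    var ioNode = AUNode()
    AUGraphAddNode(graph, &mixerDesc, &mixerNode)
    AUGraphAddNode(graph, &ioDesc, &ioNode)
    AUGraphOpen(graph)

    // Patch mixer output 0 into the hardware output, like a patchbay.
    // The mixer's input buses would be fed by render callbacks (see above),
    // installed with AUGraphSetNodeInputCallback.
    AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0)

    AUGraphInitialize(graph)
    AUGraphStart(graph)
    return graph
}
```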
A rule of thumb: look into the header files! They yield more complete and precise information than the docs; at least that was my impression. It can help a lot to look at the headers of the above-mentioned frameworks and try to get familiar with them.
Also, there will be a book about Core Audio ('Core Audio' by Kevin Avila and Chris Adamson), but it's not yet released.
Hope all this helps a little! Good luck, Sebastian