Question
I am new to the audio frameworks. Can anyone help me write the audio that is being captured from the microphone to a file?
Below is the code that plays mic input through the iPhone speaker; now I would like to save that audio on the iPhone for future use.
I found the code to record audio using the microphone here: http://www.stefanpopp.de/2011/capture-iphone-microphone/
/**
 Playback of the recorded voice starts here.
 */
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {

    // the reference to the object that owns the callback
    AudioProcessor *audioProcessor = (AudioProcessor *)inRefCon;

    // iterate over the incoming stream and copy to the output stream
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];

        // find the minimum size
        UInt32 size = MIN(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);

        // copy buffer to audio buffer which gets played after function returns
        memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);

        // set data size
        buffer.mDataByteSize = size;

        // get a copy of the recorder struct variable
        Recorder recInfo = audioProcessor.audioRecorder;

        // write the bytes
        OSStatus audioErr = noErr;
        if (recInfo.running) {
            audioErr = AudioFileWriteBytes(recInfo.recordFile,
                                           false,
                                           recInfo.inStartingByte,
                                           &size,
                                           buffer.mData); // pass the data pointer itself, not its address
            assert(audioErr == noErr);

            // increment our byte count
            recInfo.inStartingByte += (SInt64)size; // size is the number of bytes written
            audioProcessor.audioRecorder = recInfo;
        }
    }
    return noErr;
}
-(void)prepareAudioFileToRecord {
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;

    NSTimeInterval time = [[NSDate date] timeIntervalSince1970]; // returned as a double
    long digits = (long)time;                        // the first 10 digits (whole seconds)
    int decimalDigits = (int)(fmod(time, 1) * 1000); // the 3 missing digits (milliseconds)
    // long timestamp = (digits * 1000) + decimalDigits;
    NSString *timeStampValue = [NSString stringWithFormat:@"%ld", digits];
    // NSString *timeStampValue = [NSString stringWithFormat:@"%ld.%d", digits, decimalDigits];

    NSString *fileName = [NSString stringWithFormat:@"test%@.caf", timeStampValue];
    NSString *filePath = [basePath stringByAppendingPathComponent:fileName];
    NSURL *fileURL = [NSURL fileURLWithPath:filePath];

    // modify the ASBD (see EDIT: towards the end of this post!)
    audioFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;

    // set up the file (bridge cast will differ if using ARC)
    OSStatus audioErr = noErr;
    audioErr = AudioFileCreateWithURL((CFURLRef)fileURL,
                                      kAudioFileCAFType,
                                      &audioFormat,
                                      kAudioFileFlags_EraseFile,
                                      &audioRecorder.recordFile);
    assert(audioErr == noErr); // simple error checking

    audioRecorder.inStartingByte = 0;
    audioRecorder.running = true;
    self.audioRecorder = audioRecorder;
}
Thanks in advance, bala
Answer 1:
To write the bytes from an AudioBuffer to a local file we need help from Audio File Services, which is included in the AudioToolbox framework.
Conceptually we will do the following: set up an audio file and maintain a reference to it (the reference must be accessible from the render callback that you included in your post); keep track of the number of bytes written each time the callback is called; and keep a flag that tells us when to stop writing and close the file.
Because the code in the link you provided declares an AudioStreamBasicDescription that is LPCM, and hence constant bit rate, we can use the AudioFileWriteBytes function (writing compressed audio is more involved and would use the AudioFileWritePackets function instead).
Let's start by declaring a custom struct (which contains all the extra data we'll need), adding an instance variable of that struct, and making a property that points to the struct variable. We'll add this to the AudioProcessor custom class, as you already have access to this object from within the callback, where you cast it on this line:
AudioProcessor *audioProcessor = (AudioProcessor*) inRefCon;
Add this to AudioProcessor.h (above the @interface)
typedef struct Recorder {
    AudioFileID recordFile;
    SInt64      inStartingByte;
    Boolean     running;
} Recorder;
Now let's add the instance variable, make a pointer property for it, and assign the pointer to the instance variable (so we can access it from within the callback function). In the @interface, add an instance variable named audioRecorder and also make the ASBD available to the class.
Recorder audioRecorder;
AudioStreamBasicDescription recordFormat; // assign this ivar where the ASBD is created in the class
In the method -(void)initializeAudio, comment out or delete this line, as we have made recordFormat an ivar.
//AudioStreamBasicDescription recordFormat;
Now add the kAudioFormatFlagIsBigEndian format flag to where the ASBD is set up.
// also modify the ASBD in the AudioProcessor class's -(void)initializeAudio method (see EDIT: towards the end of this post!)
recordFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
And finally, add a property that is a pointer to the audioRecorder instance variable, and don't forget to synthesise it in AudioProcessor.m. We will name the pointer property audioRecorderPointer.
@property Recorder *audioRecorderPointer;
// in .m synthesise the property
@synthesize audioRecorderPointer;
Now let's assign the pointer to the ivar (this could be placed in the -(void)initializeAudio method of the AudioProcessor class)
// ASSIGN POINTER PROPERTY TO IVAR
self.audioRecorderPointer = &audioRecorder;
Now, in AudioProcessor.m, let's add a method to set up the file and open it so we can write to it. This should be called before you start the AUGraph.
-(void)prepareAudioFileToRecord {
    // let's set up a test file in the documents directory
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
    NSString *fileName = @"test_recording.aif";
    NSString *filePath = [basePath stringByAppendingPathComponent:fileName];
    NSURL *fileURL = [NSURL fileURLWithPath:filePath];

    // set up the file (bridge cast will differ if using ARC)
    OSStatus audioErr = noErr;
    audioErr = AudioFileCreateWithURL((CFURLRef)fileURL,
                                      kAudioFileAIFFType,
                                      &recordFormat, // pass the ASBD by address
                                      kAudioFileFlags_EraseFile,
                                      &audioRecorder.recordFile);
    assert(audioErr == noErr); // simple error checking

    audioRecorder.inStartingByte = 0;
    audioRecorder.running = true;
}
Okay, we are nearly there. Now we have a file to write to, and an AudioFileID that can be accessed from the render callback. So inside the callback function you posted add the following right before you return noErr at the end of the method.
// get a pointer to the recorder struct instance variable
Recorder *recInfo = audioProcessor.audioRecorderPointer;

// write the bytes
OSStatus audioErr = noErr;
if (recInfo->running) {
    audioErr = AudioFileWriteBytes(recInfo->recordFile,
                                   false,
                                   recInfo->inStartingByte,
                                   &size,
                                   buffer.mData);
    assert(audioErr == noErr);

    // increment our byte count
    recInfo->inStartingByte += (SInt64)size; // size is the number of bytes written
}
When we want to stop recording (probably invoked by some user action), simply set the running boolean to false and close the file, somewhere in the AudioProcessor class, like this:
audioRecorder.running = false;
OSStatus audioErr = AudioFileClose(audioRecorder.recordFile);
assert (audioErr == noErr);
EDIT: the endianness of the samples needs to be big-endian for the file, so add the kAudioFormatFlagIsBigEndian flag to the ASBD in the source code found at the link provided in the question.
For extra info on this topic, Apple's documentation is a great resource, and I also recommend reading 'Learning Core Audio' by Chris Adamson and Kevin Avila (of which I own a copy).
Answer 2:
Use Audio Queue Services.
There is an example in the Apple documentation that does exactly what you ask:
Audio Queue Services Programming Guide - Recording Audio
Source: https://stackoverflow.com/questions/20043725/how-to-write-audio-file-locally-recorded-from-microphone-using-audiobuffer-in-ip