iOS - AudioUnitRender returned error -10876 on device, but running fine in simulator

Submitted by 六月ゝ 毕业季﹏ on 2021-02-09 11:10:59

Question


I ran into a problem that prevents me from capturing the input signal from the microphone on the device (an iPhone 4), although the code runs fine in the simulator. The code was originally adapted from Apple's MixerHostAudio class in the MixerHost sample code. It ran fine both on the device and in the simulator before I started adding code to capture the mic input. I'm wondering if somebody could help me out. Thanks in advance!

Here is my inputRenderCallback function which feeds signal into mixer input:

static OSStatus inputRenderCallback (

    void                        *inRefCon,
    AudioUnitRenderActionFlags  *ioActionFlags,
    const AudioTimeStamp        *inTimeStamp,
    UInt32                      inBusNumber,
    UInt32                      inNumberFrames,
    AudioBufferList             *ioData) {

    recorderStructPtr recorderStructPointer = (recorderStructPtr) inRefCon;
    // ....
        AudioUnitRenderActionFlags renderActionFlags = 0;
        OSStatus err = AudioUnitRender(recorderStructPointer->iOUnit,
                                       &renderActionFlags,
                                       inTimeStamp,
                                       1, // bus number for input
                                       inNumberFrames,
                                       recorderStructPointer->fInputAudioBuffer
                                       );
                    // error returned is -10876
    // ....
}

Here is my related initialization code. For now I keep only one input in the mixer, so the mixer seems redundant, but everything worked fine before I added the input-capture code.

// Convenience function to allocate our audio buffers
- (AudioBufferList *) allocateAudioBufferListByNumChannels:(UInt32)numChannels withSize:(UInt32)size {
    AudioBufferList*            list;
    UInt32                      i;

    list = (AudioBufferList*)calloc(1, sizeof(AudioBufferList) + numChannels * sizeof(AudioBuffer));
    if(list == NULL)
        return NULL;

    list->mNumberBuffers = numChannels;
    for(i = 0; i < numChannels; ++i) {
        list->mBuffers[i].mNumberChannels = 1;
        list->mBuffers[i].mDataByteSize = size;
        list->mBuffers[i].mData = malloc(size);
        if(list->mBuffers[i].mData == NULL) {
            [self destroyAudioBufferList:list];
            return NULL;
        }
    }
    return list;
}

// initialize audio buffer list for input capture
recorderStructInstance.fInputAudioBuffer = [self allocateAudioBufferListByNumChannels:1 withSize:4096];

// I/O unit description
AudioComponentDescription iOUnitDescription;
iOUnitDescription.componentType          = kAudioUnitType_Output;
iOUnitDescription.componentSubType       = kAudioUnitSubType_RemoteIO;
iOUnitDescription.componentManufacturer  = kAudioUnitManufacturer_Apple;
iOUnitDescription.componentFlags         = 0;
iOUnitDescription.componentFlagsMask     = 0;

// Multichannel mixer unit description
AudioComponentDescription MixerUnitDescription;
MixerUnitDescription.componentType          = kAudioUnitType_Mixer;
MixerUnitDescription.componentSubType       = kAudioUnitSubType_MultiChannelMixer;
MixerUnitDescription.componentManufacturer  = kAudioUnitManufacturer_Apple;
MixerUnitDescription.componentFlags         = 0;
MixerUnitDescription.componentFlagsMask     = 0;

AUNode   iONode;         // node for I/O unit
AUNode   mixerNode;      // node for Multichannel Mixer unit

// Add the nodes to the audio processing graph
result =    AUGraphAddNode (
                processingGraph,
                &iOUnitDescription,
                &iONode);

result =    AUGraphAddNode (
                processingGraph,
                &MixerUnitDescription,
                &mixerNode
            );

result = AUGraphOpen (processingGraph);

// fetch mixer AudioUnit instance
result =    AUGraphNodeInfo (
                processingGraph,
                mixerNode,
                NULL,
                &mixerUnit
            );

// fetch RemoteIO AudioUnit instance
result =    AUGraphNodeInfo (
                             processingGraph,
                             iONode,
                             NULL,
                             &(recorderStructInstance.iOUnit)
                             );


// enable input of RemoteIO unit
UInt32 enableInput = 1;
AudioUnitElement inputBus = 1;
result = AudioUnitSetProperty(recorderStructInstance.iOUnit, 
                              kAudioOutputUnitProperty_EnableIO, 
                              kAudioUnitScope_Input, 
                              inputBus, 
                              &enableInput, 
                              sizeof(enableInput)
                              );
// setup mixer inputs
UInt32 busCount   = 1;

result = AudioUnitSetProperty (
             mixerUnit,
             kAudioUnitProperty_ElementCount,
             kAudioUnitScope_Input,
             0,
             &busCount,
             sizeof (busCount)
         );


UInt32 maximumFramesPerSlice = 4096;

result = AudioUnitSetProperty (
             mixerUnit,
             kAudioUnitProperty_MaximumFramesPerSlice,
             kAudioUnitScope_Global,
             0,
             &maximumFramesPerSlice,
             sizeof (maximumFramesPerSlice)
         );


for (UInt16 busNumber = 0; busNumber < busCount; ++busNumber) {

    // set up input callback
    AURenderCallbackStruct inputCallbackStruct;
    inputCallbackStruct.inputProc        = &inputRenderCallback;
    inputCallbackStruct.inputProcRefCon  = &recorderStructInstance;

    result = AUGraphSetNodeInputCallback (
                 processingGraph,
                 mixerNode,
                 busNumber,
                 &inputCallbackStruct
             );

    // set up stream format
    AudioStreamBasicDescription mixerBusStreamFormat;
    size_t bytesPerSample = sizeof (AudioUnitSampleType);

    mixerBusStreamFormat.mFormatID          = kAudioFormatLinearPCM;
    mixerBusStreamFormat.mFormatFlags       = kAudioFormatFlagsAudioUnitCanonical;
    mixerBusStreamFormat.mBytesPerPacket    = bytesPerSample;
    mixerBusStreamFormat.mFramesPerPacket   = 1;
    mixerBusStreamFormat.mBytesPerFrame     = bytesPerSample;
    mixerBusStreamFormat.mChannelsPerFrame  = 2;
    mixerBusStreamFormat.mBitsPerChannel    = 8 * bytesPerSample;
    mixerBusStreamFormat.mSampleRate        = graphSampleRate;

    result = AudioUnitSetProperty (
                                   mixerUnit,
                                   kAudioUnitProperty_StreamFormat,
                                   kAudioUnitScope_Input,
                                   busNumber,
                                   &mixerBusStreamFormat,
                                   sizeof (mixerBusStreamFormat)
                                   );


}

// set sample rate of mixer output
result = AudioUnitSetProperty (
             mixerUnit,
             kAudioUnitProperty_SampleRate,
             kAudioUnitScope_Output,
             0,
             &graphSampleRate,
             sizeof (graphSampleRate)
         );


// connect mixer output to RemoteIO
result = AUGraphConnectNodeInput (
             processingGraph,
             mixerNode,         // source node
             0,                 // source node output bus number
             iONode,            // destination node
             0                  // destination node input bus number
         );


// initialize AudioGraph
result = AUGraphInitialize (processingGraph);

// start AudioGraph
result = AUGraphStart (processingGraph);

// enable mixer input
result = AudioUnitSetParameter (
                     mixerUnit,
                     kMultiChannelMixerParam_Enable,
                     kAudioUnitScope_Input,
                     0, // bus number
                     1, // on
                     0
                  );

Answer 1:


First, it should be noted that the error code -10876 corresponds to the symbol kAudioUnitErr_NoConnection. You can usually identify such codes by googling the error number together with the term CoreAudio. That should be a hint that you are asking the system to render to an AudioUnit which isn't properly connected.

Within your render callback, you are casting the void* user data to a recorderStructPtr. I'm going to assume that when you debugged this code, this cast returned a non-null structure which holds your actual audio unit's address. However, you should be rendering into the AudioBufferList which is passed in to your render callback (i.e., the ioData parameter of the inputRenderCallback function). That contains the list of samples from the system which you need to process.
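Concretely, inside the asker's callback the render call would then look something like the following sketch, reusing the callback's own parameters instead of the separately allocated buffer list (this is an illustration of the suggestion above, not the asker's actual code):

```c
// Pull the input samples straight into the AudioBufferList
// that the system handed to this render callback.
OSStatus err = AudioUnitRender(recorderStructPointer->iOUnit,
                               ioActionFlags,   // flags passed to the callback
                               inTimeStamp,
                               1,               // RemoteIO input bus
                               inNumberFrames,
                               ioData);         // render into the callback's own list
```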




Answer 2:


I solved the issue on my own. It was due to a bug in my code that caused the -10876 error from AudioUnitRender().

I had set the category of my AudioSession to AVAudioSessionCategoryPlayback instead of AVAudioSessionCategoryPlayAndRecord. Once I changed the category to AVAudioSessionCategoryPlayAndRecord, I could finally capture microphone input successfully by calling AudioUnitRender() on the device.

In the simulator, using AVAudioSessionCategoryPlayback does not produce any error when calling AudioUnitRender() to capture microphone input, and capture works well there. I think this is an issue with the iOS Simulator (though not a critical one).
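For reference, the fix amounts to a configuration fragment along these lines, using the C AudioSession API that was current at the time (a sketch, not the answerer's exact code):

```c
// Select a category that enables both playback and recording;
// AVAudioSessionCategoryPlayback alone does not enable input on a device.
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                        sizeof(sessionCategory),
                        &sessionCategory);
```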




Answer 3:


I have also seen this issue occur when the values in the I/O unit's stream format property are inconsistent. Make sure that your AudioStreamBasicDescription's bits per channel, channels per frame, bytes per frame, frames per packet, and bytes per packet are all mutually consistent.

Specifically I got the NoConnection error when I changed a stream format from stereo to mono by changing the channels per frame, but forgot to change the bytes per frame and bytes per packet to match the fact that there is half as much data in a mono frame as a stereo frame.




Answer 4:


If you initialise an AudioUnit and don't set its kAudioUnitProperty_SetRenderCallback property, you'll get this error if you call AudioUnitRender on it.

Call AudioUnitProcess on it instead.
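For completeness, attaching a render callback directly to a standalone unit looks roughly like this fragment; myRenderCallback and myState are placeholder names, not identifiers from the question:

```c
AURenderCallbackStruct callback;
callback.inputProc       = &myRenderCallback;  // placeholder callback function
callback.inputProcRefCon = &myState;           // placeholder user data

AudioUnitSetProperty(unit,
                     kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input,
                     0,                         // input bus
                     &callback,
                     sizeof(callback));
```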



Source: https://stackoverflow.com/questions/5502170/ios-audiounitrender-returned-error-10876-on-device-but-running-fine-in-simul
