Playing Audio on iOS from Socket connection


Question


Hope you can help me with this issue. I have seen a lot of questions related to this, but none of them really helps me figure out what I am doing wrong here.

So on Android I have an AudioRecord which is recording audio and sending it as a byte array over a socket connection to clients. This part was super easy on Android and is working perfectly.

When I started working with iOS I found out there is no easy way to go about this, so after two days of research and trial and error this is what I have, and it still does not play any audio. It makes a noise when it starts, but none of the audio being transferred over the socket is played. I confirmed that the socket is receiving data by logging each element in the buffer array.

Here is all the code I am using; a lot of it is reused from a bunch of sites, and I can't remember all the links. (By the way, I am using Audio Units.)

First up, the audio processor's playback callback:

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {

    /**
     This is the reference to the object who owns the callback.
     */
    AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon;

    // iterate over the output buffers and copy our data into them
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];

        // find the minimum size (MIN is the Foundation macro; the bare
        // min() used originally is not defined anywhere in the code shown)
        UInt32 size = MIN(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);

        // copy buffer to audio buffer which gets played after function return
        memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);

        // set data size
        buffer.mDataByteSize = size;
    }
    return noErr;
}
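
A quirk worth flagging in this callback: it copies the same audioBuffer on every render cycle, whether or not new data has arrived from the socket, so the last received chunk repeats until something overwrites it. A minimal variation of the loop body, assuming a hypothetical hasFreshData flag on AudioProcessor (not part of the original code), would zero-fill the output when nothing new is queued:

    // sketch of the loop body, not the original code
    AudioBuffer *buffer = &ioData->mBuffers[i];
    if ([audioProcessor hasFreshData]) {
        UInt32 size = MIN(buffer->mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);
        memcpy(buffer->mData, [audioProcessor audioBuffer].mData, size);
        buffer->mDataByteSize = size;
        [audioProcessor setHasFreshData:NO]; // mark this chunk as consumed
    } else {
        // nothing new from the socket: output silence instead of stale samples
        memset(buffer->mData, 0, buffer->mDataByteSize);
    }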

Audio processor initialisation:

-(void)initializeAudio
{
    OSStatus status;

    // We define the audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output; // we want output
    desc.componentSubType = kAudioUnitSubType_RemoteIO; // RemoteIO gives us both input and output
    desc.componentFlags = 0; // must be zero
    desc.componentFlagsMask = 0; // must be zero
    desc.componentManufacturer = kAudioUnitManufacturer_Apple; // select provider

    // find the AU component by description
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // create audio unit by component
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);

    [self hasError:status:__FILE__:__LINE__];

    // enable IO for playback on the output bus
    // (kOutputBus is assumed to be defined as 0 and kInputBus as 1 elsewhere)
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO, // use io
                                  kAudioUnitScope_Output, // scope to output
                                  kOutputBus, // select output bus (0)
                                  &flag, // set flag
                                  sizeof(flag));
    [self hasError:status:__FILE__:__LINE__];

    /*
     We need to specify the format we want to work with.
     We use linear PCM because it is uncompressed and we work on raw data.

     We want 16 bits, 2 bytes per packet/frame, mono, at SAMPLE_RATE
     (assumed to be defined elsewhere as 44.1 kHz).
     */
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate         = SAMPLE_RATE;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 1;
    audioFormat.mBitsPerChannel     = 16;
    audioFormat.mBytesPerPacket     = 2;
    audioFormat.mBytesPerFrame      = 2;

    // set the format on the output scope of the input bus, i.e. the data
    // coming out of the microphone element; note that for playback through
    // the render callback, the client format is normally also set on
    // kAudioUnitScope_Input of kOutputBus
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &audioFormat,
                                  sizeof(audioFormat));

    [self hasError:status:__FILE__:__LINE__];



    /**
     We need to define a callback structure which holds
     a pointer to the playbackCallback and a reference to
     the audio processor object.
     */
    AURenderCallbackStruct callbackStruct;

    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = (__bridge void *)(self);

    // set playbackCallback as callback on our renderer for the output bus
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global,
                                  kOutputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));

    [self hasError:status:__FILE__:__LINE__];

    // reset flag to 0
    flag = 0;

    /*
     Tell the audio unit not to allocate its own render buffer on the
     input bus; we manage our own buffer and write into it directly.
     */
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_ShouldAllocateBuffer,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &flag,
                                  sizeof(flag));

    /*
     Set the number of channels to mono and allocate our block:
     512 samples × 2 bytes per sample = 1024 bytes.
     */
    audioBuffer.mNumberChannels = 1;
    audioBuffer.mDataByteSize = 512 * 2;
    audioBuffer.mData = malloc( 512 * 2 );

    // Initialize the Audio Unit and cross fingers =)
    status = AudioUnitInitialize(audioUnit);
    [self hasError:status:__FILE__:__LINE__];

    NSLog(@"Started");

}
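
One thing not shown anywhere in this code is audio session setup. On iOS the RemoteIO unit often stays silent unless the shared AVAudioSession has been configured and activated first, so that is worth ruling out. A minimal sketch (the method name is mine, error handling kept short):

#import <AVFoundation/AVFoundation.h>

-(void)activateAudioSession
{
    NSError *error = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];
    // playback category so audio stays audible even with the ringer switch muted
    [session setCategory:AVAudioSessionCategoryPlayback error:&error];
    [session setActive:YES error:&error];
    if (error) {
        NSLog(@"Audio session error: %@", error);
    }
}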

Start Playing

-(void)start;
{
    // start the audio unit. You should hear something, hopefully :)
    OSStatus status = AudioOutputUnitStart(audioUnit);
    [self hasError:status:__FILE__:__LINE__];
}

Adding data to the buffer

-(void)processBuffer: (AudioBufferList*) audioBufferList
{
    AudioBuffer sourceBuffer = audioBufferList->mBuffers[0];

    // check whether the incoming data byte size has changed
    if (audioBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
        // free the old buffer
        free(audioBuffer.mData);
        // assign the new byte size and allocate a matching buffer on mData
        audioBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
        audioBuffer.mData = malloc(sourceBuffer.mDataByteSize);
    }

    // copy the incoming audio data into the audio buffer
    memcpy(audioBuffer.mData, sourceBuffer.mData, sourceBuffer.mDataByteSize);
}
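
Note also that processBuffer (called from the socket thread) and playbackCallback (called on the audio render thread) share audioBuffer with no synchronisation and no queueing, so chunks can be overwritten before they are ever played. The usual fix is a lock-free ring buffer between the socket and the render callback; TPCircularBuffer is a popular off-the-shelf choice. For illustration, a bare-bones single-producer/single-consumer sketch (all names are mine, not from the original code):

#include <stdatomic.h>
#include <stdint.h>

#define RING_CAPACITY (16 * 1024)

typedef struct {
    uint8_t data[RING_CAPACITY];
    _Atomic uint32_t head; // advanced by the producer (socket thread)
    _Atomic uint32_t tail; // advanced by the consumer (render callback)
} RingBuffer;

// producer side: returns the number of bytes actually written
static uint32_t ring_write(RingBuffer *rb, const uint8_t *src, uint32_t len) {
    uint32_t head  = atomic_load(&rb->head);
    uint32_t tail  = atomic_load(&rb->tail);
    uint32_t space = RING_CAPACITY - (head - tail);
    if (len > space) len = space;
    for (uint32_t i = 0; i < len; i++)
        rb->data[(head + i) % RING_CAPACITY] = src[i];
    atomic_store(&rb->head, head + len);
    return len;
}

// consumer side: returns the number of bytes actually read;
// the render callback zero-fills whatever it could not get
static uint32_t ring_read(RingBuffer *rb, uint8_t *dst, uint32_t len) {
    uint32_t head  = atomic_load(&rb->head);
    uint32_t tail  = atomic_load(&rb->tail);
    uint32_t avail = head - tail;
    if (len > avail) len = avail;
    for (uint32_t i = 0; i < len; i++)
        dst[i] = rb->data[(tail + i) % RING_CAPACITY];
    atomic_store(&rb->tail, tail + len);
    return len;
}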

Stream connection callback (Socket)

-(void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode
{
    if(eventCode == NSStreamEventHasBytesAvailable)
    {
        if(aStream == inputStream) {
            uint8_t buffer[1024];
            NSInteger len;
            while ([inputStream hasBytesAvailable]) {
                len = [inputStream read:buffer maxLength:sizeof(buffer)];
                if (len > 0)
                {
                    AudioBuffer abuffer;

                    abuffer.mDataByteSize = (UInt32)len; // sample size
                    abuffer.mNumberChannels = 1; // one channel
                    abuffer.mData = buffer;

                    // decode the µ-law bytes into 16-bit PCM; note that these
                    // decoded samples are never attached to abuffer, which
                    // still points at the raw µ-law bytes above
                    int16_t decoded[1024];
                    for (NSInteger i = 0; i < len; i++)
                    {
                        decoded[i] = MuLaw_Decode(buffer[i]);
                    }

                    AudioBufferList bufferList;
                    bufferList.mNumberBuffers = 1;
                    bufferList.mBuffers[0] = abuffer;

                    NSLog(@"Read %ld bytes from the socket", (long)len);

                    [audioProcessor processBuffer:&bufferList];
                }
            }
            }
        }
    }
}
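
For what it is worth, the decoded samples above are the data that should reach the processor. A hypothetical corrected body for the if (len > 0) branch (my sketch, not the author's confirmed fix) would decode first and only then point the AudioBuffer at the 16-bit PCM, with the byte size doubled:

    int16_t decoded[1024];
    for (NSInteger i = 0; i < len; i++) {
        decoded[i] = MuLaw_Decode(buffer[i]);
    }

    AudioBuffer abuffer;
    abuffer.mNumberChannels = 1;
    abuffer.mDataByteSize   = (UInt32)(len * sizeof(int16_t)); // 2 bytes per decoded sample
    abuffer.mData           = decoded; // safe here: processBuffer memcpy's immediately

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0]    = abuffer;
    [audioProcessor processBuffer:&bufferList];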

The MuLaw_Decode function:

#define MULAW_BIAS 33
int16_t MuLaw_Decode(uint8_t number)
{
    uint8_t sign = 0, position = 0;
    int16_t decoded = 0;
    number = ~number;              // µ-law bytes are transmitted complemented
    if (number & 0x80)             // the top bit carries the sign
    {
        number &= ~(1 << 7);
        sign = 1;                  // the original assigned -1 to a uint8_t,
                                   // which wraps; only non-zero matters here
    }
    position = ((number & 0xF0) >> 4) + 5;
    decoded = ((1 << position) | ((number & 0x0F) << (position - 4))
               | (1 << (position - 5))) - MULAW_BIAS;
    return (sign == 0) ? decoded : -decoded;
}
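
A quick sanity check for the decoder: in standard G.711 µ-law the transmitted byte 0xFF encodes zero amplitude and 0x00 encodes the largest negative value, which with this bias is -8031:

#include <assert.h>

assert(MuLaw_Decode(0xFF) == 0);     // µ-law "silence"
assert(MuLaw_Decode(0x00) == -8031); // maximum negative amplitude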

And the code that opens the connection and initialises the audio processor

CFReadStreamRef readStream;
CFWriteStreamRef writeStream;



CFStreamCreatePairWithSocketToHost(NULL, (__bridge CFStringRef)@"10.0.0.14", 6000, &readStream, &writeStream);


inputStream = (__bridge_transfer NSInputStream *)readStream;
outputStream = (__bridge_transfer NSOutputStream *)writeStream;

[inputStream setDelegate:self];
[outputStream setDelegate:self];

[inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[inputStream open];
[outputStream open];


audioProcessor = [[AudioProcessor alloc] init];
[audioProcessor start];
[audioProcessor setGain:1];
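
One assumption baked into the snippet above is that it runs on a thread whose run loop is actually spinning; scheduling on [NSRunLoop currentRunLoop] from a background thread with no running run loop means stream:handleEvent: never fires. Pinning the streams to the main run loop instead is the simplest way to rule that out:

// sketch: replace the currentRunLoop scheduling above with the main run loop
[inputStream scheduleInRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
[outputStream scheduleInRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];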

I believe the issue in my code is with the socket connection callback: I am not doing the right thing with the data.


Answer 1:


I solved this in the end; see my answer here.

I intended to put the code here, but it would be a lot of copy-pasting.



Source: https://stackoverflow.com/questions/28712887/playing-audio-on-ios-from-socket-connection
