OSX CoreAudio: Getting inNumberFrames in advance - on initialization?

Submitted by 柔情痞子 on 2019-12-24 02:08:08

Question


I'm experimenting with writing a simplistic single-AU, play-through-based, (almost) no-latency tracking phase vocoder prototype in C. It's a standalone program. I want to find out how much processing load a single render callback can safely bear, so I'd rather avoid asynchronous DSP.

My concept is to have only one pre-determined value, the window step (also called hop size or decimation factor, depending on the literature). This number would equal inNumberFrames, which somehow depends on the device sampling rate (and what else?). All other parameters, such as window size and FFT size, would be set in relation to the window step. This seems the simplest way of keeping everything inside one callback.

Is there a reliable, machine-independent way to guess or query what inNumberFrames will be before actual rendering starts, i.e. before calling AudioOutputUnitStart()?

The phase vocoder algorithm is mostly standard and very simple, using vDSP functions for the FFT plus custom phase integration, and I have no problems with it.

Additional debugging info

This code is monitoring timings within the input callback:

static Float64 prev_stime;  // previous sample time
static UInt64  prev_htime;  // previous host time
printf("inBus: %u\tframes: %u\tHtime: %llu\tStime: %7.2lf\n",
       (unsigned int)inBusNumber,
       (unsigned int)inNumberFrames,
       inTimeStamp->mHostTime   - prev_htime,
       inTimeStamp->mSampleTime - prev_stime);
prev_htime = inTimeStamp->mHostTime;
prev_stime = inTimeStamp->mSampleTime;

Curiously enough, the printed inTimeStamp->mSampleTime difference actually shows the number of rendered frames (the argument's name seems somewhat misleading). This number is always 512, no matter whether another sampling rate has been set through Audio MIDI Setup.app at runtime, as if the value had been programmatically hard-coded. On one hand, the

inTimeStamp->mHostTime - prev_htime

interval changes dynamically with the sampling rate, in a mathematically clear way. As long as the sampling rate matches a multiple of 44100 Hz, actual rendering goes on. On the other hand, multiples of 48 kHz produce rendering error -10863 ( = kAudioUnitErr_CannotDoInCurrentContext ). I must have missed a very important point.


Answer 1:


The number of frames is usually the sample rate times the buffer duration. There is an Audio Unit API to request a sample rate and a preferred buffer duration (such as 44100 Hz and 5.8 ms, resulting in 256 frames), but not all hardware on all OS versions honors all requested buffer durations or sample rates.




Answer 2:


Assuming audioUnit is an input audio unit:

UInt32 inNumberFrames = 0;
UInt32 propSize = sizeof(inNumberFrames);
OSStatus err = AudioUnitGetProperty(audioUnit,
                                    kAudioDevicePropertyBufferFrameSize,
                                    kAudioUnitScope_Global,
                                    0,
                                    &inNumberFrames,
                                    &propSize);
// err == noErr on success; inNumberFrames then holds the device buffer size



Answer 3:


This number would equal inNumberFrames, which somehow depends on the device sampling rate (and what else?)

It depends on what you attempt to set it to. You can set it.

// attempt to set the preferred duration
NSTimeInterval _preferredDuration = ...;
NSError *err = nil;
[[AVAudioSession sharedInstance] setPreferredIOBufferDuration:_preferredDuration
                                                        error:&err];


// now get the actual duration it uses
NSTimeInterval _actualBufferDuration =
    [[AVAudioSession sharedInstance] IOBufferDuration];

It would use a value roughly around the preferred value you set. The actual value used is a time interval based on a power of 2 and the current sample rate.

If you are looking for consistency across devices, choose a value around 10 ms. The worst-performing reasonable modern device is the 16 GB iOS iPod touch without the rear-facing camera, yet even it can handle roughly 10 ms callbacks without trouble. On some devices you "can" set the duration quite low and get very fast callbacks, but often the audio will crackle because the processing isn't finished in the callback before the next callback arrives.



Source: https://stackoverflow.com/questions/35875886/osx-coreaudio-getting-innumberframes-in-advance-on-initialization
