audiounit

audio unit fails to run in Logic; .exp _Entry undefined in linker

∥☆過路亽.° submitted on 2019-12-25 17:47:08
Question: Background: I am trying to get Apple's example TremoloUnit to run in Logic 9. From various forums and this SO answer, the problem with Apple's examples seems to be that Logic 9 (and many other AU hosts) uses old Carbon resources. According to this technical note, adding an appropriate .r file should provide the needed backwards compatibility, so I added a .r file that matches the sample .plist. The Problem: If I include the line _TremoloUnitEntry in my .exp, the linker throws this error: Undefined
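A hedged sketch of where that symbol normally comes from (macro names follow the Core Audio SDK samples; exact availability varies by SDK version, so treat this as a pointer rather than a verified fix):

    /* An .exp file can only export symbols that some object file actually
       defines. In the Core Audio SDK samples, TremoloUnitEntry is generated
       by the Carbon entry-point macro at the bottom of TremoloUnit.cpp: */

    COMPONENT_ENTRY(TremoloUnit)    /* older SDKs: defines TremoloUnitEntry */

    /* Newer SDK drops replace it with the AudioComponent factory macro,
       e.g. AUDIOCOMPONENT_ENTRY(AUBaseProcessFactory, TremoloUnit), which
       defines TremoloUnitFactory instead. If only the factory macro is
       present, either add back the Carbon entry macro (plus the SDK's
       ComponentBase support files) or list _TremoloUnitFactory in the .exp
       rather than _TremoloUnitEntry. */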

Concatenating Audio Buffers in Objective-C

↘锁芯ラ submitted on 2019-12-25 07:04:46
Question: First of all, I am a newbie at C and Objective-C. I am trying to FFT a buffer of audio and plot its graph. I use an Audio Unit callback to get the audio buffer. The callback brings 512 frames, but after 471 frames it brings 0. (I don't know whether this is normal or not. It used to bring 471 frames full of numbers, but now somehow 512 frames with 0 after 471. Please let me know if this is normal.) Anyway, I can get the buffer from the callback, apply FFT, and draw it. This works perfectly. And here is the
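For context, a minimal sketch of the FFT-and-magnitude step with the Accelerate framework (the function name and fixed 512-frame size are assumptions; in a real app the FFTSetup should be created once, not per call):

    #include <Accelerate/Accelerate.h>

    /* Magnitude spectrum of one 512-frame mono float buffer using vDSP's
       packed real FFT. 'magnitudes' receives n/2 = 256 bins. */
    static void spectrum_from_buffer(const float *samples, float *magnitudes)
    {
        const vDSP_Length n = 512;
        const vDSP_Length log2n = 9;                  /* 2^9 = 512 */
        FFTSetup setup = vDSP_create_fftsetup(log2n, kFFTRadix2);

        float realp[256], imagp[256];
        DSPSplitComplex split = { realp, imagp };

        /* Pack the real signal into split-complex form (even samples to
           realp, odd samples to imagp), as vDSP_fft_zrip expects. */
        vDSP_ctoz((const DSPComplex *)samples, 2, &split, 1, n / 2);
        vDSP_fft_zrip(setup, &split, 1, log2n, FFT_FORWARD);

        /* Bin magnitudes; the 0.5 factor compensates for the packed
           real FFT's built-in scaling. */
        vDSP_zvabs(&split, 1, magnitudes, 1, n / 2);
        float half = 0.5f;
        vDSP_vsmul(magnitudes, 1, &half, magnitudes, 1, n / 2);

        vDSP_destroy_fftsetup(setup);
    }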

OSX AudioUnit SMP

﹥>﹥吖頭↗ submitted on 2019-12-25 02:39:10
Question: I'd like to know if someone has experience writing a HAL AudioUnit rendering callback that takes advantage of multi-core processors and/or symmetric multiprocessing. My scenario is the following: a single audio component of sub-type kAudioUnitSubType_HALOutput (together with its rendering callback) takes care of additively synthesizing n sinusoid partials with independent, individually varying, live-updated amplitude and phase values. In itself it is a rather straightforward brute-force
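Not a definitive answer to the SMP question, but a minimal sketch of one way to split the partial loop across cores with GCD (all names hypothetical). Note that dispatch_apply blocks until every slice finishes and GCD gives no hard real-time guarantee, so this is for experimentation rather than a production render callback:

    #include <dispatch/dispatch.h>
    #include <math.h>
    #include <string.h>

    #define kNumSlices 4        /* worker count; tune to core count   */
    #define kMaxFrames 4096     /* numFrames must not exceed this     */

    typedef struct { float amp, phase, phaseInc; } Partial;

    /* Per-slice scratch buffers so workers never write the same memory. */
    static float gScratch[kNumSlices][kMaxFrames];

    static void render_partials(Partial *partials, int numPartials,
                                float *out, int numFrames)
    {
        dispatch_apply(kNumSlices,
                       dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0),
                       ^(size_t slice) {
            float *buf = gScratch[slice];
            memset(buf, 0, numFrames * sizeof(float));
            /* Stride the partials across slices: slice s takes s, s+4, ... */
            for (int p = (int)slice; p < numPartials; p += kNumSlices) {
                float ph = partials[p].phase;
                for (int i = 0; i < numFrames; i++) {
                    buf[i] += partials[p].amp * sinf(ph);
                    ph += partials[p].phaseInc;
                }
                partials[p].phase = ph;
            }
        });

        /* Mix the per-slice buffers into the callback's output. */
        memset(out, 0, numFrames * sizeof(float));
        for (int s = 0; s < kNumSlices; s++)
            for (int i = 0; i < numFrames; i++)
                out[i] += gScratch[s][i];
    }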

my iOS app using audio units with an 8000 hertz sample rate returns a distorted voice

大兔子大兔子 submitted on 2019-12-24 23:13:54
Question: I really need help with this issue. I'm developing an iOS application with Audio Units; the recorded audio needs to be at an 8-bit / 8000 Hz sample rate in aLaw format. However, I'm getting a distorted voice coming out of the speaker. I came across this sample online: http://www.stefanpopp.de/2011/capture-iphone-microphone/comment-page-1/ — while trying to debug my app I used my audioFormat in his application, and I am getting the same distorted sound. I'm guessing I either have incorrect settings or
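For comparison, a sketch of a fully self-consistent AudioStreamBasicDescription for that target format (one aLaw byte encodes one frame, so every bytes/frames field is 1). Whether the remote I/O unit accepts aLaw directly varies; a common alternative is to capture linear PCM and convert afterwards:

    #include <AudioToolbox/AudioToolbox.h>

    /* 8 kHz, 8-bit, mono aLaw. */
    AudioStreamBasicDescription alawFormat = {0};
    alawFormat.mSampleRate       = 8000.0;
    alawFormat.mFormatID         = kAudioFormatALaw;
    alawFormat.mFormatFlags      = 0;
    alawFormat.mChannelsPerFrame = 1;
    alawFormat.mBitsPerChannel   = 8;
    alawFormat.mFramesPerPacket  = 1;
    alawFormat.mBytesPerFrame    = 1;
    alawFormat.mBytesPerPacket   = 1;

Distortion in this kind of setup often traces back to one of these fields disagreeing with the others.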

Starting an AVAssetReader stops callbacks to a Remote I/O AudioUnit

廉价感情. submitted on 2019-12-24 10:29:39
Question: I am using an AVAssetReader to read the audio out of the iPod Library (see here). I then take the read buffer and play it through an AudioUnit. I am trying to refactor the code to stream in the audio as I play it out. However, if an AVAssetReader is running, the AudioUnit stops receiving calls to its kAudioUnitProperty_SetRenderCallback. I have simplified my code to only play a file within the AudioUnit's callback... OSStatus UnitRenderCB(void* pRefCon, AudioUnitRenderActionFlags* flags, const
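For reference, the signature above is cut off; a render callback registered through kAudioUnitProperty_SetRenderCallback has the standard AURenderCallback shape (the body here is a placeholder sketch):

    #include <AudioToolbox/AudioToolbox.h>

    static OSStatus UnitRenderCB(void                       *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp       *inTimeStamp,
                                 UInt32                      inBusNumber,
                                 UInt32                      inNumberFrames,
                                 AudioBufferList            *ioData)
    {
        /* Copy inNumberFrames of audio into ioData from wherever the app
           buffers it -- e.g. a ring buffer that a background thread fills
           from the AVAssetReader -- so the callback itself never blocks. */
        return noErr;
    }

A common way to keep the callback serviced while the reader works is exactly that split: read on a background thread into a ring buffer and have the callback only copy from it.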

OSX CoreAudio: Getting inNumberFrames in advance - on initialization?

柔情痞子 submitted on 2019-12-24 02:08:08
Question: I'm experimenting with writing a simplistic single-AU play-through based, (almost) no-latency tracking phase vocoder prototype in C. It's a standalone program. I want to find out how much processing load a single render callback can safely bear, so I prefer keeping off async DSP. My concept is to have only one pre-determined value, the window step (also called hop size or decimation factor; different names for the same term in different literature sources). This number would equal inNumberFrames,
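A hedged sketch of one way to pin that value down before the unit starts: on an AUHAL unit the device's I/O buffer size can be requested and read back through kAudioDevicePropertyBufferFrameSize (error handling omitted; the device may clamp the request):

    #include <AudioToolbox/AudioToolbox.h>
    #include <CoreAudio/CoreAudio.h>

    /* Returns the buffer size the device will actually use, which is
       what inNumberFrames will be on each render call. */
    static UInt32 pin_buffer_frames(AudioUnit auhal, UInt32 desiredFrames)
    {
        AudioUnitSetProperty(auhal, kAudioDevicePropertyBufferFrameSize,
                             kAudioUnitScope_Global, 0,
                             &desiredFrames, sizeof(desiredFrames));

        UInt32 actual = 0, size = sizeof(actual);
        AudioUnitGetProperty(auhal, kAudioDevicePropertyBufferFrameSize,
                             kAudioUnitScope_Global, 0,
                             &actual, &size);
        return actual;
    }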

iOS - AudioOutputUnitStop results in app freeze and warning

南楼画角 submitted on 2019-12-23 10:19:42
Question: Sometimes the execution of AudioOutputUnitStop(inputUnit) causes the app to freeze for about 10-15 seconds, with the following console message: WARNING: [0x3b58918c] AURemoteIO.cpp:1225: Stop: AURemoteIO::Stop: error 0x10004003 calling TerminateOwnIOThread. This code is handled by the Novocaine library; in particular it occurs in the [Novocaine pause] method, which I invoke to stop playback of an audio file (https://github.com/alexbw/novocaine/blob/master/Novocaine
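One commonly tried workaround, sketched below; it is not a root-cause fix, it just moves the potentially slow call off the main thread so the UI doesn't hang while AURemoteIO tears down its I/O thread ('inputUnit' is the remote I/O unit named in the excerpt):

    #include <AudioToolbox/AudioToolbox.h>
    #include <dispatch/dispatch.h>

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        OSStatus err = AudioOutputUnitStop(inputUnit);
        if (err != noErr) {
            /* The warning suggests the I/O thread did not terminate
               cleanly; log the status and continue. */
        }
    });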

AudioUnit callback and synchronization: how to ensure thread safety with GCD

强颜欢笑 submitted on 2019-12-23 04:33:24
Question: I am building an audio app based on the AudioUnit callback facility and a graph of audio processing nodes. I know that the callback is executed on a separate (high-priority?) thread, and therefore all interaction with my processing nodes, such as changing EQ parameters while playing, should be done in a thread-safe manner. In other words, the nodes should be protected from modification while the audio callback chain is being executed. The way I understand it in terms of more low-level
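One lock-free pattern that fits this constraint, sketched with C11 atomics (names hypothetical): the UI thread publishes a parameter and the render callback loads it each cycle, so no mutex or GCD queue ever runs on the audio thread:

    #include <stdatomic.h>

    typedef struct {
        _Atomic float eqGain;   /* written by the main thread, read by the callback */
    } NodeParams;

    /* Main thread (e.g. from a GCD queue handling UI events): */
    static void set_eq_gain(NodeParams *p, float gain)
    {
        atomic_store_explicit(&p->eqGain, gain, memory_order_release);
    }

    /* Audio render callback: */
    static float read_eq_gain(NodeParams *p)
    {
        return atomic_load_explicit(&p->eqGain, memory_order_acquire);
    }

For parameter sets wider than one word, a common variant is to build an immutable snapshot struct and atomically swap a pointer to it, so the callback always sees a consistent set.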

how to properly fill a stereo AudioBuffer

核能气质少年 submitted on 2019-12-23 01:46:05
Question: So I'm using Apple's MixerHost sample code to do a basic audio graph setup for stereo synthesis. I have some trouble figuring out how I have to fill the buffer slice. Specifically, I get audio out only in the left channel; the right channel is silent:

    AudioUnitSampleType *buffer = (AudioUnitSampleType *)ioData->mBuffers[0].mData;
    SInt16 sampleValue;
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        sampleValue = sinf(inc) * 32767.0f;   // generate sine signal
        inc += .08;
        buffer[i] = sampleValue;
    }
    if
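The code above only ever writes mBuffers[0]. With the non-interleaved layout MixerHost uses, each channel is a separate buffer, so the right channel has to be filled too; a sketch (note also that if the stream format is the canonical 8.24 fixed-point AudioUnitSampleType, full scale is 1 << 24 rather than 32767, so the sample scaling may need adjusting as well):

    #include <AudioUnit/AudioUnit.h>
    #include <math.h>

    static float inc = 0.0f;

    static void fill_stereo(AudioBufferList *ioData, UInt32 inNumberFrames)
    {
        AudioUnitSampleType *left  = (AudioUnitSampleType *)ioData->mBuffers[0].mData;
        AudioUnitSampleType *right = (AudioUnitSampleType *)ioData->mBuffers[1].mData;

        for (UInt32 i = 0; i < inNumberFrames; i++) {
            AudioUnitSampleType s = (AudioUnitSampleType)(sinf(inc) * 32767.0f);
            inc += 0.08f;
            left[i]  = s;   /* channel 0 */
            right[i] = s;   /* channel 1: duplicate, or synthesize a
                               separate signal for true stereo */
        }
    }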

How would you connect an iPod library asset to an Audio Queue Service and process with an Audio Unit?

寵の児 submitted on 2019-12-21 17:37:11
Question: I need to process audio that comes from the iPod library. The only way to read an asset from the iPod library is AVAssetReader. To process audio with an Audio Unit, it needs to be in stereo format so I have values for the left and right channels. But when I use AVAssetReader to read an asset from the iPod library, it does not allow me to get it out in stereo format. It comes out in interleaved format, which I do not know how to break into left and right audio channels. To get to where I need to
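Splitting interleaved audio is mechanical once the sample layout is known; a sketch assuming 16-bit interleaved stereo ([L R L R ...]), which is what AVAssetReader commonly delivers with linear-PCM output settings:

    #include <stdint.h>
    #include <stddef.h>

    /* Copy interleaved stereo frames into separate left/right arrays. */
    static void deinterleave_s16(const int16_t *interleaved, size_t frames,
                                 int16_t *left, int16_t *right)
    {
        for (size_t i = 0; i < frames; i++) {
            left[i]  = interleaved[2 * i];       /* even samples: left  */
            right[i] = interleaved[2 * i + 1];   /* odd samples:  right */
        }
    }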