audioqueue

How to provide audio buffer to AudioQueue to play audio?

China☆狼群 submitted on 2019-12-22 17:38:35
Question: In my application, I receive audio data in LinearPCM format, which I need to play. I am following the iOS SpeakHere example, but I cannot work out how and where I should provide a buffer to the AudioQueue. Can anyone provide a working example of playing an audio buffer via AudioQueue on iOS? Answer 1: In the SpeakHere example, playback is achieved using an AudioQueue. When the queue is set up, a callback function is specified that is invoked whenever the queue wants more data. You can see that in this

Audio Queue is playing too fast when the buffer size is small

怎甘沉沦 submitted on 2019-12-22 05:36:20
Question: I am able to stream and play m4a files using Audio File Services + Audio Queue Services. The file's bitrate information is not available to the Audio Queue because of the file type. After downloading all of the audio packets I feed them to the player. When I choose a buffer size around 32768 or 16384, since callbacks are called less often and each buffer is big, it seems to play at almost regular speed. The problem is that sometimes I have to play small files as well, but when I choose a

Recording with AudioQueue and Monotouch static sound

不打扰是莪最后的温柔 submitted on 2019-12-19 21:49:23
Question: I have written a small program in MonoTouch to record sound from the mic of my iPhone 4S using an InputAudioQueue. I save the recorded data in an array and feed this buffer to my audio player for playback (using OutputAudioQueue). On playback it's just stuttering garbage / static sound. I have tried filling the buffer with sine waves before playback and then it sounds good, so I guess the problem is in the recording, not the playback. Can anyone help me see what is wrong? (Code

How do I stream AVAsset audio wirelessly from one iOS device to another?

痞子三分冷 submitted on 2019-12-19 04:46:42
Question: I'm building something that streams audio from the iPod library, sends the data over the network or Bluetooth, and plays it back using an audio queue. Thanks to this question and its code, which helped me a lot, I have two questions about it. 1. What should I send from one device to another? CMSampleBufferRef? AudioBuffer? mData? AudioQueueBuffer? A packet? I have no idea. 2. When the app finishes playing, it crashes and I get error (-12733). I just want to know how to handle the error instead of letting it crash.

iOS: Pulsing red double-height status bar

点点圈 submitted on 2019-12-18 09:53:46
Question: I am developing a recording app, and I'd like to show a pulsing red double-height status bar on top of my app while it is recording and the user is still in the app, just like Voice Memos. How do I do that? I can get the double-height red status bar to appear when I am outside the app, but not while inside it. Any hints? Is it actually possible at all? Answer 1: To answer the last question first: it is possible, though I don't know whether the mechanism for causing to

AudioQueue volume too low

北城余情 submitted on 2019-12-13 10:23:09
Question: I am having a problem when using AudioQueue to play PCM data. The volume is low when using the iPhone's speaker, even with the system volume turned up to maximum; however, the volume is fine when I am using earphones. I insert the data into the queue like this: memcpy(mBuffers[mIndex]->mAudioData, pcmData, mBufferByteSize); mBuffers[mIndex]->mAudioDataByteSize = mBufferByteSize; mBuffers[mIndex]->mPacketDescriptionCount = mBufferByteSize/2; OSStatus status = AudioQueueEnqueueBuffer

Get microphone input using Audio Queue in Swift 3

流过昼夜 submitted on 2019-12-12 03:27:48
Question: I am developing an app that records voice via the built-in microphone and sends it to a server live, so I need to get the byte stream from the microphone while recording. After googling and stack-overflowing for quite a while, I think I have figured out how it should work, but it does not. I think using Audio Queues might be the way to go. Here is what I have tried so far: func test() { func callback(_ a :UnsafeMutableRawPointer?, _ b : AudioQueueRef, _ c :AudioQueueBufferRef, _ d :UnsafePointer

Recording audio output only from speaker of iphone excluding microphone

ⅰ亾dé卋堺 submitted on 2019-12-12 00:03:06
Question: I am trying to record the sound from the iPhone speaker. I am able to do that, but I am unable to exclude mic input from the recorded output. I have tried sample code available on different websites with no luck. The sample I used does the recording with audio units. I need to know whether there is any audio unit property to set the mic input volume to zero. Beyond that, I gathered from other posts that Audio Queue Services might do the trick. Can anyone point me to sample code for the

Simple AudioQueue sine wave—why the distortion?

☆樱花仙子☆ submitted on 2019-12-11 12:33:50
Question: As a learning exercise, I'm using an AudioQueue to generate and play a 300 Hz sine wave. (I understand there are a variety of tools to generate and play audio, but yes, this is just to build up my Core Audio chops, and this task is all about the AudioQueue.) The wave plays, but with distortion. Recording and plotting the sound shows that there is some distortion at the boundary between buffers (every half second), in addition to other short bursts of distortion here and there. I've included my

RoboVM implementation of recording demo using AudioQueue results in “No @Marshaler found” error

生来就可爱ヽ(ⅴ<●) submitted on 2019-12-11 08:25:25
Question: I'm trying to implement iOS audio recording in RoboVM, following Apple's AudioQueue guide and their sample SpeakHere project, and am running into this error: No @Marshaler found for parameter 1 of @Callback method <AQRecorder: void HandleInputBuffer(AQRecorder,org.robovm.apple.audiotoolbox.AudioQueue,org.robovm.apple.audiotoolbox.AudioQueueBuffer,org.robovm.apple.coreaudio.AudioTimeStamp,int,org.robovm.apple.coreaudio.AudioStreamPacketDescription)> Any ideas? Here's the code I'm using: Main