core-audio

Is there a way to record an audio stream using Matt Gallagher's audio streamer?

半城伤御伤魂 submitted on 2019-12-23 02:17:15
Question: I use Matt Gallagher's audio streamer for streaming radio stations, but how can I record the audio? Is there a way to get the downloaded packets into NSData and save them to an audio file in the Documents folder on the iPhone? Thanks. Answer 1: Yes, there is, and I have done it. My problem is being able to play it back IN the same streamer (asked elsewhere). It will play back with the standard AVAudioPlayer in iOS. However, this will save the data to a file by writing it out in the streamer code. This…
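The idea described in the answer, appending the raw downloaded bytes to a file as they arrive, can be sketched roughly like this. This is not Matt Gallagher's code; the class and file names are hypothetical, and in the streamer you would call `append(_:)` from the network-data callback before the bytes are handed to the audio file stream parser.

```swift
import Foundation

// Hypothetical sketch: append each chunk of downloaded audio data to a file
// in the Documents directory. The bytes written are the raw network stream
// (e.g. MP3/AAC frames), so the resulting file has the stream's own format.
final class StreamDumper {
    private let handle: FileHandle

    init?(fileName: String) {
        let docs = FileManager.default.urls(for: .documentDirectory,
                                            in: .userDomainMask)[0]
        let url = docs.appendingPathComponent(fileName)
        FileManager.default.createFile(atPath: url.path, contents: nil)
        guard let h = try? FileHandle(forWritingTo: url) else { return nil }
        handle = h
    }

    func append(_ data: Data) {
        handle.write(data)   // sequentially appends each downloaded chunk
    }

    deinit { handle.closeFile() }
}
```

A file saved this way plays back with AVAudioPlayer, as the answer notes, because it is simply the complete encoded stream on disk.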

Example of saving audio from RemoteIO?

与世无争的帅哥 submitted on 2019-12-22 19:26:09
Question: I've searched around but haven't found any good examples or tutorials on saving audio out of a RemoteIO Audio Unit. My setup: using the MusicPlayer API, I have several AUSamplers -> MixerUnit -> RemoteIO. Audio playback works great. I would like to add functionality to save the audio output to a file. Would I do this in a render callback on the RemoteIO? Any tips or pointers to example code much appreciated! Answer 1: Due to the tight latency requirements of Audio Unit callbacks, one should not…
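A common approach here, sketched below under assumptions about the surrounding graph, is `ExtAudioFileWriteAsync`, which Apple documents as safe to call from a render callback once it has been primed with an initial zero-frame call on a non-realtime thread. The function names other than the AudioToolbox APIs are hypothetical.

```swift
import AudioToolbox

// Sketch: write the RemoteIO output to a CAF file from the render callback.
var extFile: ExtAudioFileRef?

func openOutputFile(url: CFURL, format: inout AudioStreamBasicDescription) {
    ExtAudioFileCreateWithURL(url, kAudioFileCAFType, &format, nil,
                              AudioFileFlags.eraseFile.rawValue, &extFile)
    // Prime the async writer once, from a non-realtime thread, before any
    // render-thread writes (per the ExtAudioFileWriteAsync documentation):
    ExtAudioFileWriteAsync(extFile!, 0, nil)
}

// Then, inside a post-render notify callback on the RemoteIO unit:
func writeFromCallback(ioData: UnsafeMutablePointer<AudioBufferList>,
                       frames: UInt32) {
    if let file = extFile {
        // Buffers the data and writes on an internal thread, so the
        // realtime callback does no file I/O, allocation, or locking.
        ExtAudioFileWriteAsync(file, frames, ioData)
    }
}
```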

How to play audio backwards?

帅比萌擦擦* submitted on 2019-12-22 11:15:24
Question: Some people have suggested reading the audio data from end to start, creating a copy written from start to end, and then simply playing that reversed audio data. Are there existing examples for iOS of how this is done? I found an example project called MixerHost, which at some point uses an AudioUnitSampleType holding the audio data that has been read from file and assigns it to a buffer. This is defined as: typedef SInt32 AudioUnitSampleType; #define kAudioUnitSampleFractionBits 24 And according to…
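The "read, reverse, play" suggestion reduces to reversing the order of sample frames in a buffer. A minimal self-contained sketch (the function name is hypothetical; `AudioUnitSampleType` is the `SInt32` fixed-point type the question quotes):

```swift
// Reverse a buffer of fixed-point samples frame-by-frame. For interleaved
// multi-channel audio, whole frames must be reversed, not individual
// samples, or the channels would swap on every frame.
typealias AudioUnitSampleType = Int32

func reversed(samples: [AudioUnitSampleType],
              channels: Int) -> [AudioUnitSampleType] {
    precondition(channels > 0 && samples.count % channels == 0)
    var out = [AudioUnitSampleType]()
    out.reserveCapacity(samples.count)
    let frames = samples.count / channels
    // Walk frames back-to-front, keeping channel order within each frame.
    for frame in stride(from: frames - 1, through: 0, by: -1) {
        for ch in 0..<channels {
            out.append(samples[frame * channels + ch])
        }
    }
    return out
}
```

Playing the reversed buffer through the existing MixerHost graph then produces backwards audio.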

Convert audio linear pcm to mp3 ( using LAME ) with the help of AudioQueueServices example in iOS

徘徊边缘 submitted on 2019-12-22 11:06:00
Question: I am new to iOS development. I am encoding linear PCM to MP3 on iOS: I'm trying to encode the raw PCM data from the microphone to MP3 using the AudioToolbox framework and LAME. Although everything seems to run fine if I record in .caf format, I am getting only noise and distortion in the encoded stream. I'm not sure that I set up the AudioQueue correctly, or that I process the encoded buffer in the right way... My code to set up audio recording: sample project https://github.com/vecter/Audio…
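One frequent cause of noise-only MP3 output in this setup is passing a byte count where LAME expects a per-channel sample count. A hedged sketch of the usual encode call, assuming a bridging header exposes `lame.h` and the queue delivers 16-bit mono PCM (the wrapper function is hypothetical):

```swift
// Sketch: feed one AudioQueue buffer of 16-bit mono PCM to LAME.
// lame_encode_buffer takes the number of samples PER CHANNEL, so for
// 16-bit mono that is byteCount / 2 -- not byteCount itself.
func encode(pcm: UnsafeMutablePointer<Int16>,
            byteCount: Int,
            lame: lame_t?,
            mp3Buffer: UnsafeMutablePointer<UInt8>,
            mp3BufferSize: Int32) -> Int32 {
    let samplesPerChannel = Int32(byteCount / MemoryLayout<Int16>.size)
    // For mono input the right-channel pointer may be nil.
    return lame_encode_buffer(lame, pcm, nil, samplesPerChannel,
                              mp3Buffer, mp3BufferSize)
}
```

It is also worth confirming that the AudioQueue format (sample rate, channel count) matches what was passed to `lame_init`; a mismatch there produces exactly this kind of distortion.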

Short circuiting of audio in VOIP app with CallKit

荒凉一梦 submitted on 2019-12-22 10:53:08
Question: I'm using the SpeakerBox app as a basis for my VOIP app. I have managed to get everything working, but I can't seem to get rid of the "short-circuiting" of the audio from the mic to the speaker of the device. In other words, when I make a call, I can hear myself in the speaker as well as the other person's voice. How can I change this? AVAudioSession setup: AVAudioSession *sessionInstance = [AVAudioSession sharedInstance]; NSError *error = nil; [sessionInstance setCategory…
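For reference, the SpeakerBox-style session configuration can be sketched in Swift as below. Note that this is an assumption-laden sketch, not the poster's fix: the session category alone does not route mic input to the speaker, so audible "sidetone" like this often comes from the app's own audio graph (e.g. the I/O unit's input scope being connected through to its output) rather than from AVAudioSession.

```swift
import AVFoundation

// Typical VoIP session setup (values are illustrative assumptions).
func configureVoIPSession() throws {
    let session = AVAudioSession.sharedInstance()
    // .voiceChat mode enables the voice-processing signal chain.
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.allowBluetooth])
    try session.setPreferredIOBufferDuration(0.005)
}
```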

SpeakHere sample can't play sound which loads from internet

被刻印的时光 ゝ submitted on 2019-12-22 10:29:14
Question: I'd like to play a sound file loaded from the internet, so I tried starting from the iPhone SDK SpeakHere sample. I recorded the sound, then saved and uploaded it to the internet; I could download that file and play it without problems from sound tools. But when I tried to play that URL from SpeakHere, I got the error Program received signal: "EXC_BAD_ACCESS". After tracing around, I found that in -[AudioPlayer calculateSizesFor:], bufferByteSize is set to a huge number, 806128768, which caused…

Cannot Control Volume of AVAudioPlayer via Hardware Buttons when AudioSessionActive is NO

五迷三道 submitted on 2019-12-22 09:33:49
Question: I'm building a turn-by-turn navigation app that plays periodic, short clips of sound. Sound should play regardless of whether the screen is locked, should mix with other music playing, and should make other music duck when this audio plays. Apple discusses the turn-by-turn use case in detail in the "WWDC 2010 session 412 Audio Development for iPhone OS part 1" video at minute 29:20. The implementation works great, but there is one problem: when the app is running, pressing the hardware…

Two-channel recording on the iPhone/iPad: headset + built-in mic

末鹿安然 submitted on 2019-12-22 08:08:40
Question: For an app, we have a requirement to record from two different audio sources. One mic is a special (throat) mic, and it comes with the same connector that the iPhone headset with mic uses. On a second channel, we would like to record the ambient sounds; ideally we could record from the iPhone's/iPad's built-in mic at the same time as we record from the throat-mic headset. Is there any way this is possible? Any other tips? Answer 1: The OS currently only allows an app to…

Setting a time limit when recording an audio clip?

懵懂的女人 submitted on 2019-12-22 07:48:29
Question: I searched for terms along the lines of the post title, but alas… I am building an iPhone app using AVFoundation. Is there a correct procedure to limit the amount of audio that will be recorded? I would like a maximum of 10 seconds. Thanks for any help/advice/tips/pointers. Answer 1: AVAudioRecorder has the following method: - (BOOL)recordForDuration:(NSTimeInterval)duration I think that will do the trick! http://developer.apple.com/library/ios/#documentation/AVFoundation/Reference…
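In Swift, the same method is `record(forDuration:)`; the recorder stops itself when the limit is reached, so no timer is needed. A minimal sketch (the recording settings here are illustrative assumptions):

```swift
import AVFoundation

// Record at most 10 seconds of AAC audio to the given URL.
func recordTenSeconds(to url: URL) throws -> AVAudioRecorder {
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1
    ]
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    recorder.prepareToRecord()
    recorder.record(forDuration: 10)   // stops automatically at 10 s
    return recorder
}
```

The recorder's delegate still receives `audioRecorderDidFinishRecording(_:successfully:)` when the duration elapses, which is a convenient place to update the UI.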

Swift vs Objective C pointer manipulation issue

北战南征 submitted on 2019-12-22 06:49:13
Question: I have this code in Objective-C, which works fine: list = controller->audioBufferList; list->mBuffers[0].mDataByteSize = inNumberFrames*kSampleWordSize; list->mBuffers[1].mDataByteSize = inNumberFrames*kSampleWordSize; It updates the mDataByteSize field of mBuffers[0] and mBuffers[1]. I tried translating the same thing into Swift, but it doesn't work: public var audioBufferList: UnsafeMutableAudioBufferListPointer In the function, let listPtr = controller.audioBufferList…
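A sketch of the equivalent Swift, under the assumption that the failure comes from mutating a copied `AudioBuffer`: `UnsafeMutableAudioBufferListPointer`'s subscript has a setter that writes through to the underlying list, but copying an element into a local `var` and mutating the copy changes nothing. The function name is hypothetical.

```swift
import AudioToolbox

// Equivalent of the Objective-C assignments: write the byte size through
// the buffer-list pointer's subscript, which updates the list in place.
func setByteSizes(_ list: UnsafeMutableAudioBufferListPointer,
                  frames: UInt32, sampleWordSize: UInt32) {
    let bytes = frames * sampleWordSize
    list[0].mDataByteSize = bytes       // subscript setter: writes in place
    if list.count > 1 {
        list[1].mDataByteSize = bytes
    }
    // Pitfall: `var buf = list[0]; buf.mDataByteSize = bytes` mutates a
    // local copy of the AudioBuffer struct, not the real list.
}
```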