AVAudioEngine

Save the audio file in the background

Submitted on 2019-12-01 23:01:59 by 做~自己de王妃
I have an app that changes the pitch of the audio when the pitch button is selected. I am currently using installTapOnBus to save the file, but this method only gets invoked after I press the pitch button, so only part of the audio is saved. I want to save the whole audio no matter when the pitch button is selected. Is there any way to do this? This method is used to play the audio (the excerpt is truncated):

```objc
- (void)playAudio {
    NSError *err = nil;
    audioEngine = [[AVAudioEngine alloc] init];
    AudioFileplayer = [[AVAudioPlayerNode alloc] init];
    pitch = [[AVAudioUnitTimePitch alloc] init];
    reverb = [[AVAudioUnitReverb alloc] init];
```
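A common fix for this class of problem (not taken from the original question) is to install the tap once, during engine setup, on a node that all audio flows through, such as the main mixer, instead of installing it when the pitch button is pressed. A minimal Swift sketch, assuming `audioEngine` is the running engine and the hypothetical `outputFile` is an AVAudioFile opened for writing:

```swift
import AVFoundation

// Tap the main mixer at setup time so every rendered buffer is captured,
// regardless of when the pitch effect is toggled.
// `outputFile` is an assumed AVAudioFile opened for writing.
let mixer = audioEngine.mainMixerNode
let format = mixer.outputFormat(forBus: 0)
mixer.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    do {
        try outputFile.write(from: buffer)  // runs on a background render thread
    } catch {
        print("write failed: \(error)")
    }
}
```

Because the tap sits downstream of both the player and the effect nodes, the file contains the full session, pitched and unpitched alike.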

AVAudioSession's PlayAndRecord category and AVAudioSessionModeMeasurement are incompatible with defaultToSpeaker option?

Submitted on 2019-12-01 18:16:09 by 独自空忆成欢
Question: Attempting to put AVAudioSession into the .playAndRecord category with the AVAudioSessionModeMeasurement mode causes the .defaultToSpeaker option to be ignored, resulting in output being played quietly out of the earpiece (also known as the receiver). Answer 1: While there doesn't seem to be much written about this, the documentation makes this "end result of audio output being sent to the receiver rather than speaker" seem like possible intended behavior and not a bug. let
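A commonly suggested workaround (an assumption on my part, not part of the original answer) is to keep .measurement mode but explicitly re-route the output after activating the session:

```swift
import AVFoundation

// Sketch: .defaultToSpeaker can be ignored in .measurement mode, so
// override the output port explicitly after activation.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, mode: .measurement,
                        options: [.defaultToSpeaker])
try session.setActive(true)
// Re-routes playback from the receiver (earpiece) to the speaker.
try session.overrideOutputAudioPort(.speaker)
```

Note that `overrideOutputAudioPort(.speaker)` only lasts until the route changes, so it may need to be reapplied from a route-change notification handler.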

Stream data from network in AVAudioEngine, is it possible?

Submitted on 2019-11-30 19:26:24 by 怎甘沉沦
I have an app in which I use AVAudioEngine to play files from the local file system, using AVAudioPlayerNodes and AVAudioFiles. This works perfectly fine. Now I would like to extend my setup to also support streaming MP3 files from a server on the internet. What I've tried so far: my hope was that I could create some sort of buffer from NSURL objects pointing to network addresses, which I could then use with my AVAudioPlayerNode. I've searched Stack Overflow and the internet in general but haven't found any good tips on how to achieve this. I know that the AVAudioEngine lineup
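AVAudioFile cannot open remote URLs directly. A pragmatic sketch (my assumption, not the questioner's code) is to download the MP3 to a local file first and then feed it to the existing player-node graph; true progressive streaming would instead require Audio File Stream Services or an AVAudioConverter feeding scheduled AVAudioPCMBuffers:

```swift
import AVFoundation

// Download-then-play sketch. `playerNode` is an assumed AVAudioPlayerNode
// already attached and connected in the engine; the URL is hypothetical.
let remoteURL = URL(string: "https://example.com/track.mp3")!
URLSession.shared.downloadTask(with: remoteURL) { localURL, _, error in
    guard let localURL = localURL, error == nil else { return }
    do {
        let file = try AVAudioFile(forReading: localURL)
        playerNode.scheduleFile(file, at: nil)
        playerNode.play()
    } catch {
        print("could not open downloaded file: \(error)")
    }
}.resume()
```

The trade-off is that playback cannot begin until the download completes, which is acceptable for short tracks but not for long streams.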

AVAudioEngine inputNode installTap crash when restarting recording

Submitted on 2019-11-30 17:56:30 by 一个人想着一个人
I am implementing speech recognition in my app. When I first present the view controller with the speech recognition logic, everything works fine. However, when I try to present the view controller again, I get the following crash:

```
ERROR: [0x190bf000] >avae> AVAudioNode.mm:568: CreateRecordingTap: required condition is false: IsFormatSampleRateAndChannelCountValid(format)
*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: IsFormatSampleRateAndChannelCountValid(format)'
```

Here is the code used for starting and stopping recording:
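A frequent cause of this crash (my assumption, not stated in the original post) is that a cached input format reports 0 Hz / 0 channels after the audio session has been torn down, or that a stale tap is still installed. A Swift sketch of the usual fix, assuming `audioEngine` and a `recognitionRequest` of type SFSpeechAudioBufferRecognitionRequest:

```swift
import AVFoundation

// Remove any stale tap and query the node's current format every time
// recording starts, rather than reusing a format captured earlier.
let inputNode = audioEngine.inputNode
inputNode.removeTap(onBus: 0)                      // avoid stacking taps on re-entry
let format = inputNode.outputFormat(forBus: 0)     // fresh, currently valid format
guard format.sampleRate > 0 else { return }        // session not ready yet
inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    recognitionRequest.append(buffer)
}
```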

How can I specify the format of AVAudioEngine Mic-Input?

Submitted on 2019-11-30 13:10:43 by 橙三吉。
I'd like to record some audio using AVAudioEngine and the user's microphone. I already have a working sample, but I just can't figure out how to specify the format of the output that I want. My requirement is that I need the AVAudioPCMBuffer as I speak, which it currently provides. Would I need to add a separate node that does some transcoding? I can't find much documentation or many samples on that problem, and I am also a noob when it comes to audio stuff. I know that I want NSData containing 16-bit PCM with a max sample rate of 16000 (8000 would be better). Here's my working sample: private
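Taps generally must use the input node's native format, so the usual approach (a sketch under my own assumptions, not the questioner's sample) is to convert each captured buffer to 16 kHz mono 16-bit PCM with AVAudioConverter:

```swift
import AVFoundation

// Convert tapped buffers from the mic's native format to 16 kHz mono Int16.
let inputFormat = audioEngine.inputNode.outputFormat(forBus: 0)
let targetFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                 sampleRate: 16000, channels: 1,
                                 interleaved: true)!
let converter = AVAudioConverter(from: inputFormat, to: targetFormat)!

audioEngine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { buffer, _ in
    let ratio = targetFormat.sampleRate / inputFormat.sampleRate
    let capacity = AVAudioFrameCount(Double(buffer.frameLength) * ratio)
    let converted = AVAudioPCMBuffer(pcmFormat: targetFormat, frameCapacity: capacity)!
    var error: NSError?
    converter.convert(to: converted, error: &error) { _, outStatus in
        outStatus.pointee = .haveData
        return buffer  // hand the tapped buffer to the converter
    }
    // The Int16 samples are now in converted.int16ChannelData![0] and can be
    // wrapped in Data/NSData for transmission or storage.
}
```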

Spectrogram from AVAudioPCMBuffer using Accelerate framework in Swift

Submitted on 2019-11-30 05:30:44 by ε祈祈猫儿з
I'm trying to generate a spectrogram from an AVAudioPCMBuffer in Swift. I install a tap on an AVAudioMixerNode and receive a callback with the audio buffer. I'd like to convert the signal in the buffer to a [Float: Float] dictionary, where the key represents the frequency and the value represents the magnitude of the audio at the corresponding frequency. I tried using Apple's Accelerate framework, but the results I get seem dubious. I'm sure it's just in the way I'm converting the signal. I looked at this blog post, amongst other things, for reference. Here is what I have: self.audioEngine
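For reference, a minimal forward real-FFT sketch with the classic vDSP API (my own sketch, not the poster's code): pack the real signal into split-complex form with `vDSP_ctoz`, run `vDSP_fft_zrip`, then take squared magnitudes with `vDSP_zvmags`. `samples` is assumed to hold `n` Float values copied from `buffer.floatChannelData![0]`:

```swift
import Accelerate

let n = 1024  // frame count, must be a power of two
let log2n = vDSP_Length(log2(Float(n)))
let fftSetup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2))!

var real = [Float](repeating: 0, count: n / 2)
var imag = [Float](repeating: 0, count: n / 2)
var magnitudes = [Float](repeating: 0, count: n / 2)

real.withUnsafeMutableBufferPointer { realPtr in
    imag.withUnsafeMutableBufferPointer { imagPtr in
        var split = DSPSplitComplex(realp: realPtr.baseAddress!,
                                    imagp: imagPtr.baseAddress!)
        samples.withUnsafeBufferPointer { samplesPtr in
            samplesPtr.baseAddress!.withMemoryRebound(to: DSPComplex.self,
                                                      capacity: n / 2) {
                vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))  // real → split complex
            }
        }
        vDSP_fft_zrip(fftSetup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
        vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(n / 2))  // squared magnitudes
    }
}
vDSP_destroy_fftsetup(fftSetup)
// Bin i corresponds to frequency i * sampleRate / Float(n), which gives the
// keys for the desired [Float: Float] dictionary.
```

Two common sources of "dubious" results are forgetting that `vDSP_fft_zrip` scales its output by 2 and windowing the frame (e.g. with `vDSP_hann_window`) inconsistently.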

Trying to stream audio from microphone to another phone via multipeer connectivity

Submitted on 2019-11-29 07:01:48 by 假装没事ソ
I am trying to stream audio from the microphone to another iPhone via Apple's Multipeer Connectivity framework. To do the audio capturing and playback I am using AVAudioEngine (many thanks to Rhythmic Fistman's answer here). I receive data from the microphone by installing a tap on the input; from this I get an AVAudioPCMBuffer, which I then convert to an array of UInt8 and stream to the other phone. But when I convert the array back to an AVAudioPCMBuffer, I get an EXC_BAD_ACCESS exception, with the compiler pointing to the method where I am converting the byte array to
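An EXC_BAD_ACCESS in this round trip usually means the byte count and `frameLength` disagree. A mono, Float32 sketch of both directions (my assumption, not the original code), deriving each size from the other instead of hard-coding either:

```swift
import AVFoundation

// AVAudioPCMBuffer → [UInt8] (first channel, Float32 samples assumed)
func bytes(from buffer: AVAudioPCMBuffer) -> [UInt8] {
    let byteCount = Int(buffer.frameLength) * MemoryLayout<Float>.size
    return [UInt8](Data(bytes: buffer.floatChannelData![0], count: byteCount))
}

// [UInt8] → AVAudioPCMBuffer
func buffer(from bytes: [UInt8], format: AVAudioFormat) -> AVAudioPCMBuffer? {
    let frameCount = AVAudioFrameCount(bytes.count / MemoryLayout<Float>.size)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: frameCount) else { return nil }
    buffer.frameLength = frameCount  // must be set, or playback reads zero frames
    bytes.withUnsafeBytes { raw in
        memcpy(buffer.floatChannelData![0], raw.baseAddress!, bytes.count)
    }
    return buffer
}
```

Both sides of the Multipeer link must also agree on the AVAudioFormat (sample rate and channel count), otherwise the reconstructed buffer is misinterpreted even when the copy succeeds.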

Using sound effects with AudioEngine

Submitted on 2019-11-28 16:24:52 by 99封情书
Background: I saw a video titled "AVAudioEngine in Practice" in the list of videos published at Apple's WWDC 2014 about applying sound effects to audio: https://developer.apple.com/videos/wwdc/2014/ After that, I was successfully able to change the pitch of an audio file with the following code (the excerpt is truncated):

```swift
//Audio Engine is initialized in viewDidLoad()
audioEngine = AVAudioEngine()

//The following Action is called on clicking a button
@IBAction func chipmunkPlayback(sender: UIButton) {
    var pitchPlayer = AVAudioPlayerNode()
    var timePitch = AVAudioUnitTimePitch()
    timePitch.pitch = 1000
    audioEngine
```
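For context, such an action typically continues roughly as follows (a sketch of mine, since the original snippet is truncated; `audioFile` is an assumed AVAudioFile loaded elsewhere):

```swift
// Attach nodes, wire player → pitch effect → output, then play.
audioEngine.attach(pitchPlayer)
audioEngine.attach(timePitch)
audioEngine.connect(pitchPlayer, to: timePitch, format: audioFile.processingFormat)
audioEngine.connect(timePitch, to: audioEngine.outputNode, format: audioFile.processingFormat)
pitchPlayer.scheduleFile(audioFile, at: nil)
try? audioEngine.start()
pitchPlayer.play()
```

(The 2014-era API used `attachNode(_:)`; modern Swift renames it `attach(_:)`.)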

Can I use AVAudioEngine to read from a file, process with an audio unit and write to a file, faster than real-time?

Submitted on 2019-11-28 06:34:08 by 倖福魔咒の
I am working on an iOS app that uses AVAudioEngine for various things, including recording audio to a file, applying effects to that audio using audio units, and playing back the audio with the effect applied. I use a tap to also write the output to a file. When this is done, it writes to the file in real time as the audio plays back. Is it possible to set up an AVAudioEngine graph that reads from a file, processes the sound with an audio unit, and outputs to a file, but faster than real time (i.e., as fast as the hardware can process it)? The use case for this would be to output a few
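Since iOS 11, AVAudioEngine's offline manual rendering mode addresses exactly this. A minimal sketch (assuming `engine`, an attached and connected `player`, a source AVAudioFile `file`, and an `outputFile` opened for writing):

```swift
import AVFoundation

// Render the graph offline, as fast as the CPU allows, instead of in real time.
try engine.enableManualRenderingMode(.offline,
                                     format: file.processingFormat,
                                     maximumFrameCount: 4096)
try engine.start()
player.scheduleFile(file, at: nil)
player.play()

let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                              frameCapacity: engine.manualRenderingMaximumFrameCount)!
while engine.manualRenderingSampleTime < file.length {
    let remaining = file.length - engine.manualRenderingSampleTime
    let frames = min(buffer.frameCapacity, AVAudioFrameCount(remaining))
    if try engine.renderOffline(frames, to: buffer) == .success {
        try outputFile.write(from: buffer)  // effects applied, faster than real time
    }
}
engine.stop()
```

In this mode the engine is detached from the hardware, so each `renderOffline` call pulls frames through the effect chain on demand rather than at the device's sample-rate clock.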