core-audio

How to mute the mic's audio input and recognize only the device's internal audio using aurioTouch

Submitted by 醉酒当歌 on 2020-01-10 05:40:27
Question: I have used the aurioTouch code in my app, and when I record audio it shows the audio waveform. While recording, the mic picks up the audio input and the waveform reacts to whatever sound the mic receives. So far that's fine. But when I tap the play button to play back the sound I just recorded, the mic's input should be off, so that the waveform reacts only to the audio I recorded before, and it should not react even if I speak while it…
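
No answer survives in this excerpt. One common approach, sketched below in Swift under the assumption that the app drives a RemoteIO unit the way aurioTouch does, is to reconfigure the AVAudioSession so the input route is dropped entirely during playback; alternatively, kAudioOutputUnitProperty_EnableIO can be set to 0 on the RemoteIO unit's input scope (bus 1). The helper function name is hypothetical:

    import AVFoundation

    // A minimal sketch, not taken from aurioTouch itself. Switching the session
    // category to .playback tears down the mic input path, so the waveform only
    // sees the app's own audio during playback.
    func configureSession(forPlaybackOnly playbackOnly: Bool) throws {
        let session = AVAudioSession.sharedInstance()
        if playbackOnly {
            try session.setCategory(.playback)   // no input route at all
        } else {
            try session.setCategory(.playAndRecord, options: [.defaultToSpeaker])
        }
        try session.setActive(true)
    }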

Recording sound as WAV on iPhone

Submitted by 亡梦爱人 on 2020-01-09 13:07:25
Question: I am making an iPhone recording app that needs to submit the sound file as a .wav to an external server. Starting from the SpeakHere example, I am able to record sound to a file, but only as a .caf. Does anyone know how to record it as a .wav instead, or how to convert from .caf to .wav on the iPhone? (The conversion must happen on the phone.) EDIT: I'm wondering whether anything can be done by using kAudioFileWAVEType instead of kAudioFileCAFType in AudioFileCreateWithURL. Answer 1: Yup, the key is…
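
The answer is cut off, but the EDIT's hunch is the right direction: with the C API, passing kAudioFileWAVEType (together with a linear-PCM ASBD) to AudioFileCreateWithURL produces a WAV file. The same on-device conversion can be sketched at a higher level with AVAudioFile, which infers the WAVE container from the .wav extension. Assumptions: 16-bit PCM output is acceptable, and the function name is illustrative:

    import AVFoundation

    // A sketch of on-device .caf -> .wav conversion using AVAudioFile (a
    // higher-level wrapper than the AudioFile C API the question mentions).
    func convertCAFToWAV(input: URL, output: URL) throws {
        // Read the source decoded as interleaved Int16 so it matches the writer.
        let inFile = try AVAudioFile(forReading: input,
                                     commonFormat: .pcmFormatInt16, interleaved: true)
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVSampleRateKey: inFile.processingFormat.sampleRate,
            AVNumberOfChannelsKey: inFile.processingFormat.channelCount,
            AVLinearPCMBitDepthKey: 16,
            AVLinearPCMIsFloatKey: false,
            AVLinearPCMIsBigEndianKey: false,
        ]
        // AVAudioFile infers the container (here WAVE) from the .wav extension.
        let outFile = try AVAudioFile(forWriting: output, settings: settings,
                                      commonFormat: .pcmFormatInt16, interleaved: true)
        guard let buffer = AVAudioPCMBuffer(pcmFormat: inFile.processingFormat,
                                            frameCapacity: 4096) else { return }
        while true {
            try inFile.read(into: buffer)        // reads up to 4096 frames
            if buffer.frameLength == 0 { break } // end of file
            try outFile.write(from: buffer)
        }
    }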

AVAudioEngine downsample issue

Submitted by 风流意气都作罢 on 2020-01-09 11:45:14
Question: I'm having an issue downsampling audio taken from the microphone. I'm using AVAudioEngine to take samples from the microphone with the following code:

    assert(self.engine.inputNode != nil)
    let input = self.engine.inputNode!
    let audioFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                    sampleRate: 8000, channels: 1, interleaved: false)
    let mixer = AVAudioMixerNode()
    engine.attach(mixer)
    engine.connect(input, to: mixer, format: input.inputFormat(forBus: 0))
    do {
        try engine.start()
        mixer…
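
The question is truncated, but the usual failure mode here is tapping the mixer in a format that doesn't match its output. One known-working pattern, sketched below under the assumption that the audio session is already configured for recording, is to let the mixer node perform the rate conversion by connecting it onward at the target format and tapping it in that same format:

    import AVFoundation

    // Sketch: mic at hardware rate -> mixer -> main mixer at 8 kHz; the tap then
    // delivers 8 kHz mono Float32 buffers.
    func startDownsampledTap(engine: AVAudioEngine) throws {
        let input = engine.inputNode
        let target = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                   sampleRate: 8000, channels: 1, interleaved: false)!
        let mixer = AVAudioMixerNode()
        engine.attach(mixer)
        engine.connect(input, to: mixer, format: input.inputFormat(forBus: 0))
        // Connecting the mixer onward at the target format makes its output 8 kHz.
        engine.connect(mixer, to: engine.mainMixerNode, format: target)
        engine.mainMixerNode.outputVolume = 0   // don't monitor the mic out loud
        mixer.installTap(onBus: 0, bufferSize: 1024, format: target) { buffer, _ in
            // buffer is 8 kHz mono Float32 here
        }
        try engine.start()
    }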

How do I use CoreAudio's AudioConverter to encode AAC in real-time?

Submitted by 随声附和 on 2020-01-09 09:17:26
Question: All the sample code I can find that uses AudioConverterRef focuses on use cases where all the data is available up front (such as converting a file on disk). They commonly call AudioConverterFillComplexBuffer with the PCM to be converted as the inInputDataProcUserData and just fill it in from the callback. (Is that really how it's supposed to be used? Why does it need a callback, then?) For my use case, I'm trying to stream AAC audio from the microphone, so I have no file, and my PCM buffer is being…
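
The question text ends mid-sentence, but its two sub-questions share an answer: the callback exists precisely because AudioConverterFillComplexBuffer pulls input on demand, so a streaming encoder simply hands over whatever PCM has arrived and signals "no data yet" otherwise. Below is a sketch of that pull model using AVAudioConverter, a higher-level wrapper swapped in for the C API for brevity; the converter is assumed to have been created with a PCM input format and an AAC output format, and `pending` is whatever PCM the mic callback has queued (the names are illustrative):

    import AVFoundation

    // Encode one chunk of already-captured mic PCM to AAC packets.
    func encodeChunk(converter: AVAudioConverter,
                     aacFormat: AVAudioFormat,
                     pending: AVAudioPCMBuffer?) -> AVAudioCompressedBuffer {
        let out = AVAudioCompressedBuffer(format: aacFormat,
                                          packetCapacity: 8,
                                          maximumPacketSize: converter.maximumOutputPacketSize)
        var consumed = false
        _ = converter.convert(to: out, error: nil) { _, outStatus in
            // Pull callback: hand over the PCM we have, else say "no data yet".
            if let pcm = pending, !consumed {
                consumed = true
                outStatus.pointee = .haveData
                return pcm
            }
            outStatus.pointee = .noDataNow   // come back when more PCM arrives
            return nil
        }
        return out   // out.packetCount AAC packets are now filled in
    }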

Is it possible to load a compressed audio file directly into a buffer, without conversion?

Submitted by 跟風遠走 on 2020-01-07 07:20:07
Question: I am developing an iOS app that must handle several stereo audio files (ranging from a few seconds to four minutes in duration) at once, playing up to three back simultaneously, synced through a Multi-Channel-Mixer-based AUGraph. My audio files are compressed (as MP3, AAC or CAF), but when they are loaded into buffers they are converted into the 32-bit AudioUnitSampleType format (my code is based on Apple's iPhoneMultichannelMixerTest). Needless to say, with such large buffers, app memory…
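
The question is truncated, but the short answer is that a Multi-Channel Mixer only accepts linear PCM, so the decode step can't be skipped at the graph's inputs; what can be avoided is holding entire decoded files in memory. A sketch of the streaming alternative, using AVAudioFile in place of the question's preloading code (the class name and chunk size are illustrative):

    import AVFoundation

    // Keep only a small decoded PCM window in memory and refill it on the fly,
    // instead of decoding a whole four-minute file into a buffer up front.
    final class StreamedSource {
        private let file: AVAudioFile
        private let chunkFrames: AVAudioFrameCount = 16_384   // ~0.37 s at 44.1 kHz

        init(url: URL) throws { file = try AVAudioFile(forReading: url) }

        /// Decodes the next small PCM chunk; returns nil at end of file.
        func nextChunk() throws -> AVAudioPCMBuffer? {
            guard let buf = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                             frameCapacity: chunkFrames) else { return nil }
            try file.read(into: buf)
            return buf.frameLength > 0 ? buf : nil
        }
    }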

Memory is growing in audio buffer code

Submitted by 徘徊边缘 on 2020-01-06 05:44:09
Question: I have code that we use in many of our apps: a class that takes the buffer samples, processes them, and then sends a notification back to the main class. The code is C and Objective-C. It works just fine, but there is memory growth that I can see in Instruments' Allocations tool: the "overall bytes" figure keeps growing by about 100 KB per second, because of parts of the code that I have identified. This is the callback function, with the line that causes the problem; it happens many times a…
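
The body is cut off before the offending line, but the two classic causes of steady allocation growth in an audio callback are Objective-C objects being autoreleased on a thread that has no pool, and per-callback allocation. A sketch of both fixes, translated to Swift for illustration (the DSP line is a placeholder):

    import Foundation

    final class BufferProcessor {
        // Preallocate once; never allocate inside the callback.
        private var scratch = [Float](repeating: 0, count: 4096)

        func process(samples: UnsafePointer<Float>, count: Int) {
            autoreleasepool {   // drains autoreleased objects on every pass
                let n = min(count, scratch.count)
                for i in 0..<n { scratch[i] = samples[i] * 0.5 }  // placeholder DSP
                // Post notifications from outside the audio thread in real code;
                // NSNotification and its userInfo dictionary both allocate.
            }
        }
    }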

Audio producer threads with OSX AudioComponent consumer thread and callback in C

Submitted by 眉间皱痕 on 2020-01-06 03:08:31
Question: This question is not about a plugin; it's about the design of a standalone application, and it is connected with a few questions I've asked before. I have to write a multi-threaded audio synthesis function whose amount of data crunching far exceeds what can be accommodated on the CoreAudio render thread: several thousand independent, amplitude- and phase-interpolating, sample-accurate sine-wave oscillators in real time. This requires more CPU power than any single processor core can bear,…
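
The question ends mid-sentence, but the standard design it is reaching for is: worker threads synthesize ahead of time into a lock-free ring buffer, and the CoreAudio render callback only copies samples out. A simplified single-producer/single-consumer ring is sketched below; the plain Int indices are a deliberate simplification, and production code should use real atomics (e.g. the swift-atomics package) or C11 atomics for the head/tail handoff:

    import Foundation

    // One ring per producer thread; the render callback is the only consumer.
    final class SPSCRing {
        private var data: [Float]
        private var head = 0   // written by the producer thread only
        private var tail = 0   // written by the consumer (audio) thread only
        init(capacity: Int) { data = [Float](repeating: 0, count: capacity) }

        func write(_ samples: [Float]) -> Bool {   // producer thread
            guard data.count - (head - tail) >= samples.count else { return false }
            for s in samples { data[head % data.count] = s; head += 1 }
            return true
        }

        func read(into out: inout [Float]) -> Int {   // render callback: copy only
            let n = min(out.count, head - tail)
            for i in 0..<n { out[i] = data[tail % data.count]; tail += 1 }
            return n
        }
    }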

Using CMSampleTimingInfo, CMSampleBuffer and AudioBufferList from raw PCM 16000 sample rate stream

Submitted by 会有一股神秘感。 on 2020-01-05 04:08:23
Question: I receive audio data and its size from outside; the audio appears to be linear PCM, signed Int16, but when I record it using an AssetWriter it saves to the audio file highly distorted and at a higher pitch.

    #define kSamplingRate 16000
    #define kNumberChannels 1

    UInt32 framesAlreadyWritten = 0;

    - (AudioStreamBasicDescription)getAudioFormat {
        AudioStreamBasicDescription format;
        format.mSampleRate = kSamplingRate;
        format.mFormatID = kAudioFormatLinearPCM;
        format.mFormatFlags =…
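
The code is cut off exactly at mFormatFlags, which is where this class of distortion/pitch bug usually lives: if the flags or the bytes-per-frame fields disagree with the actual signed 16-bit mono data, every sample is misinterpreted. A sketch of an ASBD matching "linear PCM, signed Int16, mono, 16 kHz", written in Swift for illustration:

    import AVFoundation

    // Every field below is derived from the question's stated stream format:
    // 16 kHz, 1 channel, signed 16-bit integer PCM, packed, native-endian.
    var format = AudioStreamBasicDescription(
        mSampleRate: 16_000,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
        mBytesPerPacket: 2,    // 1 channel * 2 bytes per sample
        mFramesPerPacket: 1,   // uncompressed PCM: 1 frame per packet
        mBytesPerFrame: 2,
        mChannelsPerFrame: 1,
        mBitsPerChannel: 16,
        mReserved: 0
    )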