core-audio

How to access multiple buffers in UnsafePointer<AudioBufferList> (non-mutable)

£可爱£侵袭症+ submitted on 2020-03-25 18:23:54
Question: I've been trying to use the new AVAudioSinkNode in Core Audio. It passes audioBufferList, which is of type UnsafePointer<AudioBufferList>, to the closure handling the audio. The Swift mapping of AudioBufferList only exposes the first buffer, so you cannot reach the others when more than one is present. Apple provides a wrapper for the mutable version, UnsafeMutablePointer<AudioBufferList>, called UnsafeMutableAudioBufferListPointer, which allows access to multiple buffers. But I cannot find a…
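A workaround often suggested for this situation (a sketch, not taken from the truncated question): UnsafeMutableAudioBufferListPointer only accepts a mutable pointer, so cast the const pointer with UnsafeMutablePointer(mutating:) and iterate the buffers, treating them strictly as read-only.

```swift
import AVFoundation

// Minimal sketch: wrap the immutable AudioBufferList pointer so every
// buffer is reachable. Casting away const is safe here only because we
// never write through the pointer.
let sinkNode = AVAudioSinkNode { _, _, audioBufferList -> OSStatus in
    let ablPointer = UnsafeMutableAudioBufferListPointer(
        UnsafeMutablePointer(mutating: audioBufferList))
    for buffer in ablPointer {                    // one AudioBuffer per channel
        guard let data = buffer.mData else { continue }
        let channel = UnsafeBufferPointer(
            start: data.assumingMemoryBound(to: Float.self),
            count: Int(buffer.mDataByteSize) / MemoryLayout<Float>.size)
        _ = channel.reduce(0, +)                  // placeholder: process samples here
    }
    return noErr
}
```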

Change Sample rate with AudioConverter

南楼画角 submitted on 2020-03-21 06:55:51
Question: I am trying to re-sample the input audio from 44.1 kHz to 48 kHz:
1. using AudioToolbox's AUAudioUnit.inputHandler;
2. writing the 44.1 kHz input out to a WAV file (this works perfectly);
3. converting the 44.1 kHz audio to 48 kHz and writing the converted bytes to a file: https://developer.apple.com/documentation/audiotoolbox/1503098-audioconverterfillcomplexbuffer
The problem is in the third step. After writing out to a file, the audio is very noisy. Here is my code: // convert to 48kHz var…
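For comparison (not the asker's code): the C-level AudioConverterFillComplexBuffer is easy to misfeed, and a mismatch between the ASBD and the bytes actually supplied (interleaving, Int16 vs Float32) is a classic cause of noisy output. Here is a minimal sketch of the same 44.1 kHz to 48 kHz conversion using the higher-level AVAudioConverter, assuming mono Float32 PCM:

```swift
import AVFoundation

// Minimal sketch: 44.1 kHz -> 48 kHz with AVAudioConverter (mono Float32).
let inFormat  = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)!
let outFormat = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 1)!
let converter = AVAudioConverter(from: inFormat, to: outFormat)!

func resample(_ input: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    let ratio = outFormat.sampleRate / inFormat.sampleRate
    let capacity = AVAudioFrameCount(Double(input.frameLength) * ratio) + 32
    guard let output = AVAudioPCMBuffer(pcmFormat: outFormat,
                                        frameCapacity: capacity) else { return nil }
    var fed = false          // feed the input buffer exactly once
    var error: NSError?
    let status = converter.convert(to: output, error: &error) { _, inputStatus in
        if fed { inputStatus.pointee = .noDataNow; return nil }
        fed = true
        inputStatus.pointee = .haveData
        return input
    }
    return (status == .error || error != nil) ? nil : output
}
```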

Control mono playback output with Core Audio

自闭症网瘾萝莉.ら submitted on 2020-03-03 06:18:12
Question: I'm developing an iOS application that uses the RemoteIO audio unit to record audio from the microphone, process it, and output it to the speakers (headset). Currently I use a single channel (mono) for input and output. What I'd like to do is allow users to choose an output speaker: left only, right only, or both. My current code supports only the "both" setting; the same sound comes from both speakers. Here's how I set the stream format (kAudioUnitProperty_StreamFormat) of the…
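One way to approach this (a sketch under assumptions, since the question's stream-format code is cut off): keep a two-channel, non-interleaved Float32 stream format on the output side, and route the processed mono signal in the render callback, zero-filling whichever channel is muted.

```swift
import AudioToolbox

enum OutputChannel { case left, right, both }
var outputChannel: OutputChannel = .both   // set from the UI

// Render-callback body sketch: copy processed mono samples into the chosen
// channel(s) of a non-interleaved stereo ioData, silencing the other side.
// Assumes Float32 samples; `mono` holds the processed input frames.
func fillOutput(_ ioData: UnsafeMutableAudioBufferListPointer,
                mono: UnsafePointer<Float>, frames: Int) {
    let left  = ioData[0].mData!.assumingMemoryBound(to: Float.self)
    let right = ioData[1].mData!.assumingMemoryBound(to: Float.self)
    for i in 0..<frames {
        left[i]  = (outputChannel == .right) ? 0 : mono[i]
        right[i] = (outputChannel == .left)  ? 0 : mono[i]
    }
}
```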

Using CFArrayGetValueAtIndex in Swift with UnsafePointer (AUPreset)

拈花ヽ惹草 submitted on 2020-01-24 13:50:14
Question: My problem is simple, but tricky. I want to write this line AUPreset *aPreset = (AUPreset*)CFArrayGetValueAtIndex(mEQPresetsArray, indexPath.row); in Swift. The trick is that the return value is UnsafePointer<Void>. Answer 1: Have you tried this? let aPreset = UnsafePointer<AUPreset>(CFArrayGetValueAtIndex(mEQPresetsArray, indexPath.row)) Answer 2: Here's the Swift 4 version: let aPreset = unsafeBitCast(CFArrayGetValueAtIndex(mEQPresetsArray, indexPath.row), to: AUPreset.self) Source: https://stackoverflow
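A caution worth adding to the answers above (my note, not from the thread): unsafeBitCast traps at runtime when the source and destination differ in size, and an UnsafeRawPointer is a pointer, not an AUPreset value. A sketch of a safer modern form, reusing the question's mEQPresetsArray and indexPath and assuming the array really stores pointers to AUPreset values:

```swift
import AudioToolbox

// Read the AUPreset value that the CFArray element points to.
let raw = CFArrayGetValueAtIndex(mEQPresetsArray, indexPath.row)!
let aPreset = raw.load(as: AUPreset.self)
// Equivalent: raw.assumingMemoryBound(to: AUPreset.self).pointee
```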

Clipping sound with Opus on Android, sent from iOS

泪湿孤枕 submitted on 2020-01-23 12:09:43
Question: I am recording audio on iOS from an audio unit, encoding the bytes with Opus, and sending them via UDP to the Android side. The problem is that the sound plays back slightly clipped. I have also tested the sound by sending the raw data from iOS to Android, and it plays perfectly. My AudioSession code is: try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker]) try audioSession.setPreferredIOBufferDuration(0.02) try audioSession.setActive(true) My recording callback code is:…
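A frequent culprit in this setup (a sketch built on assumptions, not the asker's code): setPreferredIOBufferDuration(0.02) is only a hint, so the callback rarely delivers exactly 20 ms of audio, while Opus only accepts fixed frame sizes (2.5/5/10/20/40/60 ms). Buffering the callback output and encoding fixed-size frames avoids feeding the encoder ragged chunks; opusEncode and sendOverUDP below are hypothetical stand-ins.

```swift
import Foundation

// Accumulate mic samples; encode fixed 20 ms frames (960 samples at 48 kHz).
var pending: [Int16] = []
let opusFrameSize = 960

func opusEncode(_ frame: [Int16]) -> Data { Data() }  // stub: replace with a real opus_encode wrapper
func sendOverUDP(_ packet: Data) {}                   // stub: replace with the real socket send

func onMicSamples(_ samples: [Int16]) {
    pending.append(contentsOf: samples)
    while pending.count >= opusFrameSize {
        let frame = Array(pending.prefix(opusFrameSize))
        pending.removeFirst(opusFrameSize)
        sendOverUDP(opusEncode(frame))   // always a whole, valid Opus frame
    }
}
```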

Starting with the Core Audio framework

别说谁变了你拦得住时间么 submitted on 2020-01-22 15:09:32
Question: For a project I intend to start on soon, I will need to play back compressed and uncompressed audio files. To do that, I intend to use the Core Audio framework. However, I have no prior experience in audio programming, and I'm really not sure where to start. Are there any beginner-level resources or sample projects that demonstrate how to build a simple audio player using Core Audio? Answer 1: A preview of a book on Core Audio just came out. I've started reading it and as a beginner…
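For orientation (a minimal sketch, not from the thread): the gentlest entry point for playing both compressed and uncompressed files is AVAudioPlayer, which sits at the top of the Core Audio stack; the file path here is illustrative.

```swift
import AVFoundation

let url = URL(fileURLWithPath: "/path/to/song.m4a")  // illustrative path
do {
    // Handles MP3, AAC, ALAC, WAV, AIFF, and more. Keep a strong
    // reference to the player for the duration of playback.
    let player = try AVAudioPlayer(contentsOf: url)
    player.prepareToPlay()
    player.play()
} catch {
    print("Playback failed: \(error)")
}
```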

How to render system input from the remote I/O audio unit and play these samples in stereo

旧时模样 submitted on 2020-01-17 05:08:09
Question: I am implementing a play-through program from a (mono) microphone to a stereo output. For the output I configured an AudioStreamBasicDescription with two channels and set this ASBD on the input scope of the remote I/O unit. However, when I configure the render callback to draw the system input, no audio is played. On the other hand, when the ASBD is set to a single channel, audio plays without problems. The audio unit render is implemented by: AudioUnitRender(_rioUnit, ioActionFlags,…
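A sketch of the pattern usually suggested for mono-mic to stereo-out play-through (assumptions: non-interleaved Float32 output, and that the format passed to AudioUnitRender must match the mono input side, not the stereo output side): render the microphone (bus 1) into a mono scratch buffer list, then duplicate that channel into both output buffers. `monoList` is a one-buffer AudioBufferList prepared elsewhere (hypothetical setup code).

```swift
import AudioToolbox

func duplicateMonoInput(_ rioUnit: AudioUnit,
                        _ ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
                        _ inTimeStamp: UnsafePointer<AudioTimeStamp>,
                        _ inNumberFrames: UInt32,
                        _ monoList: UnsafeMutablePointer<AudioBufferList>,
                        _ ioData: UnsafeMutableAudioBufferListPointer) -> OSStatus {
    // Pull the mic input (bus 1) into the mono scratch buffer.
    let status = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp,
                                 1, inNumberFrames, monoList)
    guard status == noErr else { return status }
    // Copy the single input channel into both stereo output channels.
    let mono = monoList.pointee.mBuffers.mData!.assumingMemoryBound(to: Float.self)
    for ch in 0..<ioData.count {
        let out = ioData[ch].mData!.assumingMemoryBound(to: Float.self)
        for i in 0..<Int(inNumberFrames) { out[i] = mono[i] }
    }
    return noErr
}
```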

Stopping and Quickly Replaying an AudioQueue

匆匆过客 submitted on 2020-01-13 16:26:50
Question: I've got an audio queue that plays, stops, and pauses correctly, but I'm finding that the AudioQueueStop() function takes a long time to execute. I'd like to stop and then immediately restart a playing audio queue, and I'm wondering what the quickest way to do so would be. In my project I keep multiple audio queues around to play specific sounds over and over. There is a situation where I must stop some of those sounds and then immediately play them and many more at…
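A pattern worth trying (a sketch, since the asker's code isn't shown): AudioQueueStop(queue, false) waits for all queued buffers to drain, which is what makes it feel slow. Passing true stops synchronously, and a reset discards anything still enqueued before restarting.

```swift
import AudioToolbox

func restart(_ queue: AudioQueueRef) {
    AudioQueueStop(queue, true)   // true = stop immediately, don't drain
    AudioQueueReset(queue)        // belt-and-braces: discard pending buffers
    // ...re-enqueue buffers for the next sound here...
    AudioQueueStart(queue, nil)   // nil = start as soon as possible
}
```

An alternative, if even a synchronous stop is too slow, is to leave the queue running and simply control what gets enqueued, or to use AudioQueuePause plus AudioQueueReset instead of a full stop.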

Unable to get correct frequency value on iPhone

元气小坏坏 submitted on 2020-01-13 07:15:48
Question: I'm trying to analyze frequency-detection algorithms on the iOS platform. I found several implementations using FFT and Core Audio (example 1 and example 2), but in both cases the detected frequency is imprecise:
(1) For A4 (440 Hz) it shows 441.430664 Hz.
(1) For C6 (1046.5 Hz) it shows 1518.09082 Hz.
(2) For A4 (440 Hz) it shows 440.72 Hz.
(2) For C6 (1046.5 Hz) it shows 1042.396606 Hz.
Why does this happen, and how can I avoid the problem and detect the frequency more accurately? Answer 1: Resolution in…
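The truncated answer is pointing at FFT bin resolution: with sample rate Fs and an N-point FFT, bins are Fs/N apart, e.g. 44100 / 1024 ≈ 43.07 Hz, so a pure 440 Hz tone cannot land exactly on a bin (the wildly wrong C6 reading in example 1 likely comes from picking a different peak entirely, such as a harmonic). A common remedy, sketched here with precomputed magnitudes assumed, is parabolic interpolation around the peak:

```swift
import Foundation

// Estimate a sub-bin peak frequency by fitting a parabola through the
// log-magnitudes of the peak bin and its two neighbours. Assumes all
// magnitudes are positive and the spectrum has at least three bins.
func interpolatedPeakFrequency(magnitudes: [Float],
                               sampleRate: Float, fftSize: Int) -> Float {
    guard magnitudes.count >= 3 else { return 0 }
    let inner = 1..<(magnitudes.count - 1)
    guard let k = inner.max(by: { magnitudes[$0] < magnitudes[$1] }) else { return 0 }
    let a = log(magnitudes[k - 1]), b = log(magnitudes[k]), c = log(magnitudes[k + 1])
    let denom = a - 2 * b + c
    let delta = denom == 0 ? 0 : 0.5 * (a - c) / denom   // peak offset in bins
    return (Float(k) + delta) * sampleRate / Float(fftSize)
}
```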

What's the reason for using a Circular Buffer in an iOS Audio Calling App?

混江龙づ霸主 submitted on 2020-01-11 03:30:06
Question: My question is pretty much self-explanatory; sorry if it seems too dumb. I am writing an iOS VoIP dialer and have checked some open-source iOS audio-calling apps. Almost all of them use a circular buffer to store recorded and received PCM audio data. So I am wondering why we need a circular buffer in this case. What is the exact reason for using such an audio buffer? Thanks in advance. Answer 1: Good question. There is another good reason for using a circular buffer. In iOS, if you use…
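To make the decoupling concrete (a toy sketch, not code from any of the apps mentioned): the record/render callback runs on a real-time thread that must never block or allocate, while the network and codec side drains at its own pace; a fixed-size ring buffer mediates between the two. A production version needs atomic indices or a lock-free design such as TPCircularBuffer.

```swift
// Single-producer/single-consumer ring buffer over Int16 PCM samples.
// The audio callback writes; the network thread reads. Fixed capacity
// means no allocation on the real-time path; writes drop data when full.
struct RingBuffer {
    private var storage: [Int16]
    private var head = 0, tail = 0   // head: write index, tail: read index
    init(capacity: Int) { storage = Array(repeating: 0, count: capacity) }

    mutating func write(_ samples: [Int16]) {
        for s in samples {
            let next = (head + 1) % storage.count
            if next == tail { return }        // full: drop instead of blocking
            storage[head] = s
            head = next
        }
    }
    mutating func read(upTo n: Int) -> [Int16] {
        var out: [Int16] = []
        while out.count < n, tail != head {
            out.append(storage[tail])
            tail = (tail + 1) % storage.count
        }
        return out
    }
}
```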