core-audio

split stereo audio to mono streams on iOS

Posted by 僤鯓⒐⒋嵵緔 on 2019-12-24 02:43:28
Question: Apologies if this has been answered; I've seen lots of questions but no good answers. I'm trying to export stereo music from my iPod library to two mono CAF files. How can I do this on iOS? I'm currently using Objective-C. Thanks! Update: I've managed to get Apple's sample code working from here: https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/05_Export.html My code will now import a media file and output a valid CAF file, which …
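A minimal sketch of the splitting step, assuming the exported file already exists on disk and decodes to deinterleaved Float32 (the default processing format of AVAudioFile); the function name and URLs are illustrative, not from the original question:

    import AVFoundation

    // Sketch: read a stereo file and write each channel to its own mono CAF.
    // AVAudioFile decodes to deinterleaved Float32 by default, so each channel
    // is already a separate buffer we can copy out directly.
    func splitStereoToMono(input: URL, leftOut: URL, rightOut: URL) throws {
        let inFile = try AVAudioFile(forReading: input)
        guard inFile.processingFormat.channelCount == 2 else { return }
        let mono = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                 sampleRate: inFile.processingFormat.sampleRate,
                                 channels: 1, interleaved: false)!
        // AVAudioFile infers the container (CAF) from the ".caf" extension.
        let left  = try AVAudioFile(forWriting: leftOut,  settings: mono.settings)
        let right = try AVAudioFile(forWriting: rightOut, settings: mono.settings)

        let capacity: AVAudioFrameCount = 4096
        let inBuf  = AVAudioPCMBuffer(pcmFormat: inFile.processingFormat, frameCapacity: capacity)!
        let outBuf = AVAudioPCMBuffer(pcmFormat: mono, frameCapacity: capacity)!

        while inFile.framePosition < inFile.length {
            try inFile.read(into: inBuf)
            let n = Int(inBuf.frameLength)
            guard n > 0, let src = inBuf.floatChannelData,
                  let dst = outBuf.floatChannelData else { break }
            outBuf.frameLength = inBuf.frameLength
            for (channel, file) in [(0, left), (1, right)] {
                for i in 0..<n { dst[0][i] = src[channel][i] }
                try file.write(from: outBuf)
            }
        }
    }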

OSX CoreAudio: Getting inNumberFrames in advance - on initialization?

Posted by 柔情痞子 on 2019-12-24 02:08:08
Question: I'm experimenting with writing a simplistic single-AU, play-through-based, (almost) no-latency tracking phase vocoder prototype in C. It's a standalone program. I want to find out how much processing load a single render callback can safely bear, so I prefer to keep off async DSP. My concept is to have only one pre-determined value, the window step (also called hop size or decimation factor; different literature sources use different names for the same term). This number would equal inNumberFrames, …
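One commonly used approach (a sketch, not guaranteed on every device): pin the HAL output unit's I/O buffer size before starting it, so inNumberFrames is known at initialization time. kAudioDevicePropertyBufferFrameSize can be accessed through the output unit, though the render size is still capped by kAudioUnitProperty_MaximumFramesPerSlice and the hardware may not honor the exact value:

    import CoreAudio
    import AudioToolbox

    // Sketch: request a fixed I/O buffer size on the output unit before
    // AudioOutputUnitStart, so every render callback sees the same
    // inNumberFrames. Verify afterwards with AudioUnitGetProperty.
    func pinBufferSize(of unit: AudioUnit, to frames: UInt32) -> OSStatus {
        var value = frames
        return AudioUnitSetProperty(unit,
                                    kAudioDevicePropertyBufferFrameSize,
                                    kAudioUnitScope_Global,
                                    0,
                                    &value,
                                    UInt32(MemoryLayout<UInt32>.size))
    }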

Controlling volume of running applications in Mac OS X via Objective-C

Posted by 左心房为你撑大大i on 2019-12-24 00:38:06
Question: Please advise, with Objective-C code snippets and useful links, how I can control all audio output signals in OS X. I think it should be something like a proxy layer somewhere in OS X's logic layers. Thank you! Answer 1: It's somewhat sad that there is no simple API to do this. Luckily it isn't too hard, just verbose. First, get the system output device: UInt32 size = sizeof(AudioDeviceID); AudioDeviceID outputDevice; OSStatus result = AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice, &size, &outputDevice); …
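AudioHardwareGetProperty has since been deprecated; a sketch of the same two steps with the current AudioObject API (element 0 is the master channel; some devices only expose volume on channel elements 1 and 2, which this sketch does not handle):

    import CoreAudio

    // Sketch: look up the default output device, then set its volume scalar.
    func setDefaultOutputVolume(_ volume: Float32) -> OSStatus {
        var deviceID = AudioDeviceID(0)
        var size = UInt32(MemoryLayout<AudioDeviceID>.size)
        var addr = AudioObjectPropertyAddress(
            mSelector: kAudioHardwarePropertyDefaultOutputDevice,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: 0)
        var err = AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                             &addr, 0, nil, &size, &deviceID)
        guard err == noErr else { return err }

        var vol = volume                       // 0.0 ... 1.0
        addr = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyVolumeScalar,
            mScope: kAudioDevicePropertyScopeOutput,
            mElement: 0)
        err = AudioObjectSetPropertyData(deviceID, &addr, 0, nil,
                                         UInt32(MemoryLayout<Float32>.size), &vol)
        return err
    }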

Drawing Waveform with AVAssetReader and with ARC

Posted by 落花浮王杯 on 2019-12-24 00:17:18
Question: I'm trying to apply Unsynchronized's answer (Drawing waveform with AVAssetReader) while using ARC. Only a few modifications were required, mostly removing release statements. Many thanks for a great answer! I'm using Xcode 4.2 targeting an iOS 5 device, but I'm getting stuck on one statement at the end while trying to invoke the whole thing. The method is shown here: -(void) importMediaItem { MPMediaItem* item = [self mediaItem]; waveFormImage = [[UIImage alloc] initWithMPMediaItem:item completionBlock:^ …
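For reference, the underlying technique the linked answer implements; this is a Swift sketch (not the original Objective-C category) that pulls the PCM samples a waveform view would then downsample and draw:

    import AVFoundation

    // Sketch: decode an asset's audio track to interleaved Float32 PCM.
    func readSamples(from asset: AVAsset) throws -> [Float] {
        guard let track = asset.tracks(withMediaType: .audio).first else { return [] }
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVLinearPCMBitDepthKey: 32,
            AVLinearPCMIsFloatKey: true,
            AVLinearPCMIsNonInterleaved: false,
            AVLinearPCMIsBigEndianKey: false,
        ]
        let reader = try AVAssetReader(asset: asset)
        let output = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
        reader.add(output)
        reader.startReading()

        var samples: [Float] = []
        while let buffer = output.copyNextSampleBuffer(),
              let block = CMSampleBufferGetDataBuffer(buffer) {
            var length = 0
            var ptr: UnsafeMutablePointer<CChar>?
            CMBlockBufferGetDataPointer(block, atOffset: 0, lengthAtOffsetOut: nil,
                                        totalLengthOut: &length, dataPointerOut: &ptr)
            ptr?.withMemoryRebound(to: Float.self, capacity: length / 4) {
                samples.append(contentsOf: UnsafeBufferPointer(start: $0, count: length / 4))
            }
        }
        return samples
    }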

Core Audio: Working with the speaker; is it possible to route to the internal receiver, AVAudioSessionPortBuiltInReceiver (not the loudspeaker)?

Posted by 我的梦境 on 2019-12-23 17:27:55
Question: As far as the docs go, there is no documentation about routing to, or even getting the port details of, AVAudioSessionPortBuiltInReceiver. (Note: please read again, this is not about the AVAudioSessionPortBuiltInSpeaker port.) As far as I can find, overrideOutputAudioPort can only be done for: public enum AVAudioSessionPortOverride : UInt { case None case Speaker } The question is: is there any possibility to play audio through public let AVAudioSessionPortBuiltInReceiver: …
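There is indeed no direct "route to receiver" call; a sketch of the usual behavior (relying on AVAudioSession defaults, which is an assumption about this use case rather than a documented routing API): with the .playAndRecord category and no .speaker override, iPhone output goes to the built-in receiver by default:

    import AVFoundation

    // Sketch: the receiver is reached indirectly, by using .playAndRecord
    // and making sure the loudspeaker override is off.
    func preferReceiverOutput() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .default, options: [])
        try session.overrideOutputAudioPort(.none)  // .speaker would force the loudspeaker
        try session.setActive(true)
    }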

How Do I Get Reliable Timing for my Audio App?

Posted by 荒凉一梦 on 2019-12-23 16:06:39
Question: I have an audio app in which all of the sound-generating work is done by Pure Data (using libpd). I've coded a special sequencer in Swift which controls the start/stop playback of multiple sequences, played by the synth engines in Pure Data. Until now, I've completely avoided using Core Audio or AVFoundation for any aspect of my app, because I know nothing about them, and they both seem to require C or Objective-C coding, about which I know nearly nothing. However, I've been told …
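One way to get dependable musical timing without learning all of Core Audio is to count samples instead of wall-clock time: fire sequencer steps from the audio thread, where elapsed frames are exact. A sketch (the class and the fireStep callback are illustrative names, not a libpd API):

    import Foundation

    // Sketch: a sample-domain step clock. Call advance(by:) once per audio
    // callback with that callback's frame count; jitter is then bounded by
    // one buffer rather than by Timer scheduling.
    final class SampleClock {
        let sampleRate: Double = 44_100
        private var framesUntilStep: Int
        private let stepFrames: Int

        init(bpm: Double, stepsPerBeat: Int) {
            stepFrames = Int(sampleRate * 60.0 / bpm) / stepsPerBeat
            framesUntilStep = stepFrames
        }

        func advance(by frameCount: Int, fireStep: () -> Void) {
            var remaining = frameCount
            while remaining >= framesUntilStep {
                remaining -= framesUntilStep
                framesUntilStep = stepFrames
                fireStep()
            }
            framesUntilStep -= remaining
        }
    }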

iOS - AudioOutputUnitStop results in app freeze and warning

Posted by 南楼画角 on 2019-12-23 10:19:42
Question: Sometimes the execution of AudioOutputUnitStop(inputUnit) causes the app to freeze for about 10-15 seconds, with the following console message: WARNING: [0x3b58918c] AURemoteIO.cpp:1225: Stop: AURemoteIO::Stop: error 0x10004003 calling TerminateOwnIOThread. This code is handled by the Novocaine library; in particular it occurs in the [Novocaine pause] method, which I invoke to stop playback of an audio file (https://github.com/alexbw/novocaine/blob/master/Novocaine …

How do I achieve very accurate timing in Swift?

Posted by 大城市里の小女人 on 2019-12-23 07:59:52
Question: I am working on a musical app with an arpeggio/sequencing feature that requires great timing accuracy. Currently, using 'Timer', I have achieved an average jitter of ~5 ms, but a max jitter of ~11 ms, which is unacceptable for fast arpeggios of 8th, 16th, and especially 32nd notes. I've read that 'CADisplayLink' is more accurate than 'Timer', but since its accuracy is limited to 1/60th of a second (~16-17 ms), it seems like it would be a less accurate approach than what …
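A sketch of one common workaround, assuming the tick callback is cheap: a dedicated high-priority thread that sleeps to near each deadline, then spins on mach_absolute_time() for sub-millisecond jitter. For musical apps, scheduling against the audio clock (e.g. AVAudioPlayerNode.play(at:)) is usually the more robust choice:

    import Foundation

    // Sketch: coarse-sleep to within ~1 ms of the deadline, then busy-wait.
    final class PrecisionClock: Thread {
        let intervalSeconds: Double
        let tick: () -> Void
        private var timebase = mach_timebase_info_data_t()

        init(intervalSeconds: Double, tick: @escaping () -> Void) {
            self.intervalSeconds = intervalSeconds
            self.tick = tick
            super.init()
            mach_timebase_info(&timebase)
            qualityOfService = .userInteractive
        }

        private func nanos(_ t: UInt64) -> UInt64 {
            t * UInt64(timebase.numer) / UInt64(timebase.denom)
        }

        override func main() {
            let intervalNs = UInt64(intervalSeconds * 1e9)
            var next = nanos(mach_absolute_time()) + intervalNs
            while !isCancelled {
                let now = nanos(mach_absolute_time())
                if next > now + 1_000_000 {
                    usleep(useconds_t((next - now - 1_000_000) / 1_000))
                }
                while nanos(mach_absolute_time()) < next {}  // spin to the deadline
                tick()
                next += intervalNs
            }
        }
    }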

How can I implement Equalizer in my iPhone application? [closed]

Posted by 五迷三道 on 2019-12-23 04:45:55
Question: Closed. This question needs to be more focused. It is not currently accepting answers. Closed 5 years ago. I am developing an iPad app with equalizer functionality, which means sound plays with three bands (low, high, medium). I googled it and found this link: iPhoneMixerEQGraphTest. It basically mixes the sound, but I want to apply equalizer effects to my sound. Please help. Answer 1: …
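A sketch of a three-band equalizer using AVAudioUnitEQ (iOS 8+); the band frequencies, bandwidths, and graph wiring below are illustrative choices, not taken from the linked sample:

    import AVFoundation

    // Sketch: player -> 3-band EQ -> main mixer. Schedule a file on the
    // player and start the engine to hear output; drive band.gain (in dB)
    // from the app's low/mid/high sliders.
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let eq = AVAudioUnitEQ(numberOfBands: 3)

    let frequencies: [Float] = [100, 1_000, 8_000]  // low, medium, high
    for (band, frequency) in zip(eq.bands, frequencies) {
        band.filterType = .parametric
        band.frequency = frequency
        band.bandwidth = 1.0                        // in octaves
        band.gain = 0                               // dB
        band.bypass = false
    }

    engine.attach(player)
    engine.attach(eq)
    engine.connect(player, to: eq, format: nil)
    engine.connect(eq, to: engine.mainMixerNode, format: nil)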

No audio in video recording (using GPUImage) after initializing The Amazing Audio Engine

Posted by 本小妞迷上赌 on 2019-12-23 03:32:17
Question: I'm using two third-party tools in my project. One is The Amazing Audio Engine, which I use for audio filters. The other is GPUImage, or more specifically, GPUImageMovieWriter. When I record videos, I merge an audio recording with the video. This works fine. However, sometimes I do not use The Amazing Audio Engine and just record a normal video using GPUImageMovieWriter. The problem is that even just after initializing The Amazing Audio Engine, the video has only a fraction of a second of audio …
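A speculative workaround sketch, on the assumption that the symptom comes from The Amazing Audio Engine leaving the shared AVAudioSession in a state GPUImageMovieWriter's mic capture doesn't expect: deactivate the session after tearing down the audio engine and put it back into a plain record-friendly configuration before recording. The category, mode, and options here are assumptions, not a fix confirmed by either library's documentation:

    import AVFoundation

    // Sketch: reset the shared audio session before starting video capture.
    func resetAudioSessionForVideoRecording() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setActive(false, options: .notifyOthersOnDeactivation)
        try session.setCategory(.playAndRecord, mode: .videoRecording, options: [])
        try session.setActive(true)
    }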