core-audio

AudioUnit tone generator is giving me a chirp at the end of each tone generated

Submitted by 这一生的挚爱 on 2019-12-20 15:30:17
Question: I'm creating an old-school music emulator for the old GWBasic PLAY command. To that end I have a tone generator and a music player. Between each of the notes played I'm getting a chirp sound that's mucking things up. Below are both of my classes: ToneGen.h #import <Foundation/Foundation.h> @interface ToneGen : NSObject @property (nonatomic) id delegate; @property (nonatomic) double frequency; @property (nonatomic) double sampleRate; @property (nonatomic) double theta; - (void)play:(float)ms; …
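A chirp or click between notes is almost always a waveform discontinuity: each tone starts or gets cut off at a nonzero sample value. Below is a minimal sketch of the usual fix, assuming Float32 mono samples; the helper name is illustrative, not from the question's code.

    #import <AudioToolbox/AudioToolbox.h>

    // Ramp the first and last few milliseconds of each tone so the waveform
    // starts and ends at silence. A hard cut at a nonzero sample produces
    // exactly the kind of click/chirp described above.
    static void ApplyEdgeFade(Float32 *samples, UInt32 frameCount, double sampleRate)
    {
        UInt32 rampFrames = (UInt32)(0.005 * sampleRate); // ~5 ms linear ramp
        if (rampFrames > frameCount / 2) rampFrames = frameCount / 2;
        for (UInt32 i = 0; i < rampFrames; i++) {
            Float32 gain = (Float32)i / (Float32)rampFrames;
            samples[i] *= gain;                  // fade in at the note's start
            samples[frameCount - 1 - i] *= gain; // fade out at the note's end
        }
    }

Calling this on each tone's full sample buffer before handing it to the Audio Unit removes the discontinuity without audibly changing the note.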

How do I register for a notification for when the sound volume changes?

Submitted by 跟風遠走 on 2019-12-20 14:43:47
Question: I need my app to be notified when the OS X sound volume has changed. This is for a desktop app, not for iOS. How can I register for this notification? Answer 1: This can be a tiny bit tricky because some audio devices support a master channel, but most don't, so the volume will be a per-channel property. Depending on what you need to do, you could observe only one channel and assume that all other channels the device supports have the same volume. Regardless of how many channels you want to watch, …
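The answer's per-channel approach maps onto the HAL's property-listener API. A sketch, assuming channel 1 of the device you care about (use element 0 for the master channel on devices that have one):

    #include <CoreAudio/CoreAudio.h>
    #include <stdio.h>

    // Called by the HAL whenever the observed volume property changes.
    static OSStatus VolumeListener(AudioObjectID inObjectID,
                                   UInt32 inNumberAddresses,
                                   const AudioObjectPropertyAddress inAddresses[],
                                   void *inClientData)
    {
        Float32 volume = 0;
        UInt32 size = sizeof(volume);
        AudioObjectGetPropertyData(inObjectID, &inAddresses[0], 0, NULL, &size, &volume);
        printf("channel %u volume is now %f\n", (unsigned)inAddresses[0].mElement, volume);
        return noErr;
    }

    void RegisterForVolumeChanges(AudioDeviceID device)
    {
        AudioObjectPropertyAddress address = {
            kAudioDevicePropertyVolumeScalar,
            kAudioDevicePropertyScopeOutput,
            1 // channel 1; add a listener per channel you want to watch
        };
        AudioObjectAddPropertyListener(device, &address, VolumeListener, NULL);
    }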

How do you set the input level (gain) on the built-in input (OSX Core Audio / Audio Unit)?

Submitted by 我的未来我决定 on 2019-12-20 14:17:51
Question: I've got an OS X app that records audio data using an Audio Unit. The Audio Unit's input can be set to any available source with inputs, including the built-in input. The problem is, the audio I get from the built-in input is often clipped, whereas in a program such as Audacity (or even QuickTime) I can turn down the input level and avoid clipping. Multiplying the sample frames by a fraction, of course, doesn't work: I get a lower volume, but the samples themselves are still …
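The input level Audacity adjusts is the device's own volume, reachable through the same kAudioDevicePropertyVolumeScalar property on the input scope of the device rather than on the Audio Unit. A sketch, assuming the hardware exposes a settable per-channel input volume:

    #include <CoreAudio/CoreAudio.h>

    // Set the hardware input gain (0.0 to 1.0) on one input channel of a device.
    // Not all devices allow this; check with AudioObjectIsPropertySettable first.
    void SetInputGain(AudioDeviceID device, Float32 gain)
    {
        AudioObjectPropertyAddress address = {
            kAudioDevicePropertyVolumeScalar,
            kAudioDevicePropertyScopeInput,
            1 // channel 1; repeat for each input channel the device exposes
        };
        AudioObjectSetPropertyData(device, &address, 0, NULL, sizeof(gain), &gain);
    }

Because this lowers the gain in the device itself rather than after capture, it avoids the clipping that multiplying the delivered samples cannot undo.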

iOS 7 SDK not abiding background audio

Submitted by 人走茶凉 on 2019-12-20 08:21:37
Question: I have done a lot of research, both on Google and Stack Overflow, and none of the answers I found work in iOS 7. I started writing a fresh app against the iOS 7 SDK with Xcode 5. All I'm trying to do is play audio in the app from a file stored in the app bundle (not from the Music library). I want the audio to play in the background and be controllable when the screen is locked (in addition to from Control Center). I set the APPNAME-Info.plist key UIBackgroundModes to audio. It is not handling things in the app …
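The plist key alone is not enough: the audio session must also be in the playback category and active, or iOS 7 suspends audio along with the app. A sketch of the usual setup; the method name is illustrative:

    #import <AVFoundation/AVFoundation.h>
    #import <UIKit/UIKit.h>

    // Pair this with UIBackgroundModes = audio in the Info.plist.
    - (void)configureBackgroundAudio
    {
        NSError *error = nil;
        [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback
                                               error:&error];
        [[AVAudioSession sharedInstance] setActive:YES error:&error];
        // Needed for lock-screen / Control Center transport controls:
        [[UIApplication sharedApplication] beginReceivingRemoteControlEvents];
    }

A responder in the app (e.g. the view controller) must also override remoteControlReceivedWithEvent: to react to the lock-screen controls.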

Matt Gallagher's iOS Tone Generator

Submitted by 試著忘記壹切 on 2019-12-20 07:06:54
Question: Can someone point me to a working version of Matt Gallagher's Tone Generator? http://www.cocoawithlove.com/assets/objc-era/ToneGenerator.zip As Matt says, it hasn't been updated and apparently got broken by newer APIs. I updated what I could figure out needed updating, and now it compiles and runs with only deprecation warnings, but all it does is make clicking sounds when the "Play" and "Stop" buttons are touched. I've gone through the code and looked at the documentation in Xcode for the API …
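For orientation, the heart of Matt Gallagher's generator is a render callback that carries the sine phase across calls; clicks at Play/Stop usually mean the output jumps to or from a nonzero sample instead of passing through zero. A sketch modeled on that callback, reusing the ToneGen properties from the first question above:

    #import <AudioToolbox/AudioToolbox.h>
    #include <math.h>

    static OSStatus RenderTone(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
    {
        ToneGen *toneGen = (__bridge ToneGen *)inRefCon; // assumes ARC
        double theta = toneGen.theta;
        double thetaIncrement = 2.0 * M_PI * toneGen.frequency / toneGen.sampleRate;

        Float32 *buffer = (Float32 *)ioData->mBuffers[0].mData;
        for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
            buffer[frame] = (Float32)(sin(theta) * 0.25); // fixed amplitude
            theta += thetaIncrement;
            if (theta > 2.0 * M_PI) theta -= 2.0 * M_PI;
        }
        toneGen.theta = theta; // persist the phase for the next callback
        return noErr;
    }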

Is there a way to intercept audio output from within your app to display back an audio visualizer on iOS?

Submitted by 末鹿安然 on 2019-12-20 06:48:25
Question: We're currently using the Linphone library to make VoIP calls, and it has its own solution for audio playback. However, we would like to display a visualizer within our own app for the audio that Linphone is outputting. Is there a way we can intercept this data (maybe through sample buffering) in order to draw an audio waveform/volume meter in the user interface? AVAudioPlayer or AVPlayer is out of the question since we do not have access to those objects. Is there a solution in place …
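If the underlying RemoteIO Audio Unit can be obtained from the audio stack (an assumption; Linphone does not expose it publicly), a render-notify tap observes the rendered samples without altering playback. A sketch of that approach, assuming Float32 mono data in buffer 0:

    #import <AudioToolbox/AudioToolbox.h>
    #include <math.h>

    // Observes output samples after rendering and computes an RMS level that
    // a UI meter could display.
    static OSStatus MeterTap(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber,
                             UInt32 inNumberFrames,
                             AudioBufferList *ioData)
    {
        if (!(*ioActionFlags & kAudioUnitRenderAction_PostRender)) return noErr;
        Float32 *samples = (Float32 *)ioData->mBuffers[0].mData;
        float sum = 0;
        for (UInt32 i = 0; i < inNumberFrames; i++) sum += samples[i] * samples[i];
        float rms = sqrtf(sum / inNumberFrames);
        // Hand rms off to the UI (e.g. dispatch to the main queue).
        (void)rms;
        return noErr;
    }

    // Installed with: AudioUnitAddRenderNotify(ioUnit, MeterTap, NULL);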

AVAudioSequencer Causes Crash on Deinit/Segue: 'required condition is false: outputNode'

Submitted by 非 Y 不嫁゛ on 2019-12-20 06:36:31
Question: The code below causes a crash with the following errors whenever the object is deinitialized (e.g. when performing an unwind segue back to another ViewController): required condition is false: [AVAudioEngineGraph.mm:4474:GetDefaultMusicDevice: (outputNode)] Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: outputNode' The AVAudioSequencer is the root of the issue, because the error ceases if it is removed. How can this crash be …
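A commonly reported workaround is to control the teardown order explicitly: stop and release the sequencer while the engine still exists, rather than letting deinitialization order decide. A sketch with illustrative property names:

    // Call before the unwind segue / before the owning object deallocates.
    // The crash occurs when the sequencer is torn down after the engine
    // whose outputNode it references has already gone away.
    - (void)teardownAudio
    {
        [self.sequencer stop];
        [self.engine stop];
        self.sequencer = nil; // release the sequencer before the engine
        self.engine = nil;
    }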

Convert .caf to .mp3 on iPhone [closed]

Submitted by 瘦欲@ on 2019-12-20 03:23:19
Question: I realize this question has been asked before, but I haven't seen an answer that works for my needs. Basically, I have full-length songs in .caf format, but I need to be able to upload/download them from a server. Is it viable to do compression (to something like .mp3 or .wav) on …
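Worth noting: .wav is uncompressed, and the iOS system frameworks do not include an MP3 encoder; the supported on-device route is AAC in an .m4a container, e.g. via AVAssetExportSession. A sketch, with illustrative names:

    #import <AVFoundation/AVFoundation.h>

    // Re-encode a .caf file as AAC (.m4a) on the device.
    void ExportCafToM4a(NSURL *cafURL, NSURL *outURL)
    {
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:cafURL options:nil];
        AVAssetExportSession *session =
            [[AVAssetExportSession alloc] initWithAsset:asset
                                             presetName:AVAssetExportPresetAppleM4A];
        session.outputFileType = AVFileTypeAppleM4A;
        session.outputURL = outURL;
        [session exportAsynchronouslyWithCompletionHandler:^{
            if (session.status == AVAssetExportSessionStatusCompleted) {
                NSLog(@"export finished: %@", outURL.path);
            }
        }];
    }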

Using ExtAudioFileWriteAsync() in callback function. Can't get to run

Submitted by 拈花ヽ惹草 on 2019-12-20 02:55:01
Question: I just can't seem to get very far in Core Audio. My goal is to write captured audio data from an instrument unit to a file. I have set up a call to a callback function on an instrument unit with this: CheckError(AudioUnitAddRenderNotify(player->instrumentUnit, MyRenderProc, &player), "AudioUnitAddRenderNotify Failed"); I set up the file and AudioStreamBasicDescription with this: #define FILENAME @"output_IV.aif" NSString *fileName = FILENAME; // [NSString stringWithFormat:FILENAME_FORMAT, hz]; …
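For context, ExtAudioFileWriteAsync is the right call here because it is safe on the render thread, but it has two prerequisites that commonly cause silent failures: it should be primed once (zero frames, NULL buffer) before rendering starts, and writes should happen only in the post-render phase. A sketch of the callback, assuming inRefCon points at the player struct:

    // Render-notify proc: capture the instrument unit's output to a file.
    static OSStatus MyRenderProc(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
    {
        MyAUGraphPlayer *player = (MyAUGraphPlayer *)inRefCon;
        if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
            // Buffers the data and writes it on ExtAudioFile's own thread,
            // so no file I/O happens on the render thread itself.
            ExtAudioFileWriteAsync(player->outputFile, inNumberFrames, ioData);
        }
        return noErr;
    }

    // Before starting the graph (non-time-critical context):
    //   ExtAudioFileWriteAsync(player->outputFile, 0, NULL); // prime async writer

The file's client data format (kExtAudioFileProperty_ClientDataFormat) must also match the stream format coming out of the instrument unit, or the writes will fail.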