core-audio

Core Audio (iOS 5.1) Reverb2 properties do not exist, error code -10877

可紊 submitted on 2019-12-22 05:31:49
Question: I am playing with Apple's sample project "LoadPresetDemo". I have added the reverb audio unit kAudioUnitSubType_Reverb2 (the only reverb available on iOS) to the graph. The Core Audio header file "AudioUnitParameters.h" states that Reverb2 should respond to these parameters:

enum {
    // Global, CrossFade, 0->100, 100
    kReverb2Param_DryWetMix = 0,
    // Global, Decibels, -20->20, 0
    kReverb2Param_Gain = 1,
    // Global, Secs, 0.0001->1.0, 0.008
    kReverb2Param_MinDelayTime = 2, //

AVAudioPlayer resetting currently playing sound and playing it from beginning

一世执手 submitted on 2019-12-22 01:42:29
Question: I'm having an issue with AVAudioPlayer where I want to reset the player if it's currently playing and have it play again from the start. I tried the following with no luck: the sound plays once, but the second time I select the button it stops the sound, and the third time it starts the sound up again.

//Stop the player and restart it
if (player.playing) {
    NSLog(@"Reset sound: %@", selectedSound);
    [player stop];
    [player play];
} else {
    NSLog(@"playSound: %@", selectedSound);
    [player play];
}

I've also tried using
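The described tap-stops/tap-starts cycle matches how -stop tears playback down without rewinding. A common fix is to rewind via the player's currentTime property instead of calling -stop; a minimal sketch, adapted to the variable names used in the question:

```objc
// Sketch: rewind rather than stop, so playback restarts from 0 immediately.
// (player and selectedSound are the variables from the question.)
if (player.playing) {
    NSLog(@"Reset sound: %@", selectedSound);
    player.currentTime = 0;   // seek back to the beginning
    [player play];            // continue playing from the start
} else {
    NSLog(@"playSound: %@", selectedSound);
    [player play];
}
```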

iPhone SDK: play a sound with coreAudio

跟風遠走 submitted on 2019-12-22 00:59:21
Question: So far I've been using AudioServices to play sounds in my drum app, which caused horrible lag etc. I've been told that if I use Core Audio there will be no lag and the performance will be better. The same person also told me that AudioServices is only meant for playing short alert sounds. Any idea where I could start with Core Audio? If you have any code, it's helpful too :) but tutorials would be better :D. Thanks in advance! -DD

Answer 1: I really recommend Apple's documentation and the sample apps they

How to use kAULowShelfParam_CutoffFrequency parameter of kAudioUnitSubType_LowShelfFilter which controls bass in Core Audio?

空扰寡人 submitted on 2019-12-21 23:46:35
Question: You may have already gone through my earlier question, "How to use kAudioUnitSubType_LowShelfFilter of kAudioUnitType_Effect which controls bass in Core Audio?", before coming to this one. I am slowly but steadily getting things right for the bass control of music, but I have not yet succeeded in my objective. I now know that I have to change kAULowShelfParam_CutoffFrequency to change the bass. I was using the following code until 5 to 7 days ago; it plays music properly but doesn't change the bass properly.
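For reference, the low-shelf filter's cutoff frequency is normally set with AudioUnitSetParameter on the global scope. A minimal sketch, assuming lowShelfUnit is an already-initialized kAudioUnitSubType_LowShelfFilter instance (the name and the 120 Hz value are illustrative assumptions):

```objc
#import <AudioToolbox/AudioToolbox.h>

// Sketch: set the low-shelf cutoff to 120 Hz on the global scope.
// lowShelfUnit is assumed to be an initialized low-shelf AudioUnit;
// error handling is kept minimal.
OSStatus status = AudioUnitSetParameter(lowShelfUnit,
                                        kAULowShelfParam_CutoffFrequency,
                                        kAudioUnitScope_Global,
                                        0,          // element
                                        120.0f,     // cutoff in Hz
                                        0);         // buffer offset frames
if (status != noErr) {
    NSLog(@"Failed to set cutoff frequency: %d", (int)status);
}
```

Note that the audible amount of bass boost or cut is controlled by kAULowShelfParam_Gain (in decibels); the cutoff frequency only chooses where the shelf begins.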

Implementing a post-processed low-pass filter using core audio

风流意气都作罢 submitted on 2019-12-21 21:45:23
Question: I have implemented a rudimentary low-pass filter using a time-based value. This is OK, but finding the correct time slice is guesswork, and it gives different results for different input audio files. Here is what I have now:

- (void)processDataWithInBuffer:(const int16_t *)buffer outBuffer:(int16_t *)outBuffer sampleCount:(int)len {
    BOOL positive;
    for (int i = 0; i < len; i++) {
        positive = (buffer[i] >= 0);
        currentFilteredValueOfSampleAmplitude = LOWPASSFILTERTIMESLICE * (float)abs

iPhone Extended Audio File Services, mp3 -> PCM -> mp3

匆匆过客 submitted on 2019-12-21 20:46:40
Question: I would like to use the Core Audio Extended Audio File Services framework to read an mp3 file, process it as PCM, then write the modified file back as an mp3. I am able to convert the mp3 file to PCM, but I am NOT able to write the PCM file back as an mp3. I have followed and analyzed Apple's ExtAudioFileConvertTest sample and also cannot get that to work. The failure point is when I set the client format for the output file (set to a canonical PCM type). This fails with error "fmt?" if

How would you connect an iPod library asset to an Audio Queue Service and process with an Audio Unit?

寵の児 submitted on 2019-12-21 17:37:11
Question: I need to process audio that comes from the iPod library. The only way to read an asset from the iPod library is AVAssetReader. To process audio with an Audio Unit it needs to be in stereo format, so that I have values for the left and right channels. But when I use AVAssetReader to read an asset from the iPod library, it does not allow me to get it out in stereo format. It comes out in interleaved format, which I do not know how to break into left and right audio channels. To get to where I need to

React Native Audio Visualization

安稳与你 submitted on 2019-12-21 13:28:09
Question: So I am using the react-native-audio package to play preloaded audio files and capture the user's recorded audio. What I would like to do is convert the audio into some sort of data for visualization and analysis. There seem to be several options for the web, but not much in this direction specifically for React Native. How would I achieve this? Thank you.

Answer 1: I've just bumped into this post. I am building a React Native waveform visualiser; it is still a work in progress on the Android side, but its

CoreAudio AudioUnitSetProperty always fails to set Sample Rate

牧云@^-^@ submitted on 2019-12-21 12:21:13
Question: I need to change the output sample rate from 44.1 kHz to 32.0 kHz, but it always throws an error: Out: AudioUnitSetProperty-SF=\217\325\377\377, -10865. I don't know why it will let me set it for input but then not set it for output. My code is:

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
    OSStatus MyRenderer(void *inRefCon,
                        AudioUnitRenderActionFlags *ioActionFlags,
                        const AudioTimeStamp *inTimeStamp,
                        UInt32 inBusNumber,
                        UInt32 inNumberFrames,
                        AudioBufferList *ioData) {
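Error -10865 is kAudioUnitErr_PropertyNotWritable. On an I/O unit, the hardware-facing output scope of the output element belongs to the device and cannot be written; the format an app controls is on the input scope of that element. A sketch of that workaround (outputUnit and the ASBD field values are assumptions based on a typical default-output setup, not the question's actual code):

```objc
#import <AudioToolbox/AudioToolbox.h>

// Sketch: describe 32 kHz PCM and apply it to the *input* scope of the
// I/O unit's output element 0. Writing the hardware-facing output scope
// instead is what raises kAudioUnitErr_PropertyNotWritable (-10865).
AudioStreamBasicDescription fmt = {0};
fmt.mSampleRate       = 32000.0;
fmt.mFormatID         = kAudioFormatLinearPCM;
fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger
                      | kAudioFormatFlagIsPacked;
fmt.mChannelsPerFrame = 2;
fmt.mBitsPerChannel   = 16;
fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * (fmt.mBitsPerChannel / 8);
fmt.mFramesPerPacket  = 1;
fmt.mBytesPerPacket   = fmt.mBytesPerFrame;

OSStatus err = AudioUnitSetProperty(outputUnit,            // assumed I/O unit
                                    kAudioUnitProperty_StreamFormat,
                                    kAudioUnitScope_Input, // not Output
                                    0,                     // output element
                                    &fmt, sizeof(fmt));
if (err != noErr) NSLog(@"StreamFormat failed: %d", (int)err);
```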

Download, save, and play mp3 on an iPhone

会有一股神秘感。 submitted on 2019-12-21 06:08:22
Question: I would like to download an mp3 file from some site, save it to my Core Data model, AudioMp3, and play it. The function below sort of works, but firstly it is inefficient in that it has to save the mp3 to a file first, and secondly it plays a repeat of the same mp3 the following times it is called. I don't think my save to the database is right either, where AudioMp3.mp3 is declared as binary:

- (void) playAndSaveMp3: (NSString*) mp3 {
    NSURL *urlFromString = [NSURL URLWithString:[NSString