core-audio

Generating a tone in iOS with 16 bit PCM, AudioEngine.connect() throws AUSetFormat: error -10868

Submitted by 痞子三分冷 on 2019-12-21 05:44:09
Question: I have the following code for generating an audio tone of a given frequency and duration. It's loosely based on this answer for doing the same thing on Android (thanks: @Steve Pomeroy): https://stackoverflow.com/a/3731075/973364 import Foundation import CoreAudio import AVFoundation import Darwin class AudioUtil { class func play(frequency: Int, durationMs: Int) -> Void { let sampleRateHz: Double = 8000.0 let numberOfSamples = Int((Double(durationMs) / 1000 * sampleRateHz)) let factor: Double =
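Error -10868 is kAudioUnitErr_FormatNotSupported, and a common cause (an assumption about this code, since the excerpt is cut off) is connecting a 16-bit integer format to AVAudioEngine's mixer, which expects the standard deinterleaved Float32 format. A minimal sketch that sidesteps the error by rendering the sine as Float32:

```swift
import AVFoundation

// A minimal sketch, assuming the -10868 (kAudioUnitErr_FormatNotSupported)
// comes from handing a 16-bit integer format to the engine: the mixer wants
// the standard deinterleaved Float32 format, so render the sine as Float32.
class ToneGenerator {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()

    func play(frequency: Double, durationMs: Int) throws {
        let sampleRateHz = 8000.0
        guard let format = AVAudioFormat(standardFormatWithSampleRate: sampleRateHz,
                                         channels: 1) else { return }
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: format)

        let frameCount = AVAudioFrameCount(Double(durationMs) / 1000.0 * sampleRateHz)
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                            frameCapacity: frameCount) else { return }
        buffer.frameLength = frameCount

        // Fill the buffer with one continuous sine wave.
        let thetaIncrement = 2.0 * Double.pi * frequency / sampleRateHz
        var theta = 0.0
        let samples = buffer.floatChannelData![0]
        for frame in 0..<Int(frameCount) {
            samples[frame] = Float(sin(theta))
            theta += thetaIncrement
        }

        try engine.start()
        player.scheduleBuffer(buffer, at: nil, options: [], completionHandler: nil)
        player.play()
    }
}
```

If 16-bit data is genuinely required (e.g. for writing to disk), an AVAudioConverter between the Int16 and Float32 formats avoids the connect-time error.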

How do I check an MPMediaItem for MPMediaType of just audio?

Submitted by [亡魂溺海] on 2019-12-21 05:05:28
Question: I expect I need to do a bitwise comparison, but I am unclear on how that is done in Objective-C syntax. The enum definition of MPMediaType is below. What I need to do is ensure the MPMediaItem is not video at all, because AVAssetReader is choking on video files despite filtering to MPMediaTypeAnyAudio with my media query. How can I ensure the MPMediaItem is one of the audio-only types? enum { // audio MPMediaTypeMusic = 1 << 0, MPMediaTypePodcast = 1 << 1, MPMediaTypeAudioBook = 1 << 2,
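The question asks for Objective-C, where the masking is `(item.mediaType & MPMediaTypeAnyVideo) == 0`; the same test in Swift, where MPMediaType is an OptionSet, looks like this sketch (the helper name is made up here):

```swift
import MediaPlayer

// A sketch of the bitwise test in Swift: mask the item's mediaType against
// the audio and video group masks. An item flagged as both audio and video
// is rejected too, which is the point of the question.
func isAudioOnly(_ item: MPMediaItem) -> Bool {
    let type = item.mediaType
    let hasAudio = !type.intersection(.anyAudio).isEmpty
    let hasVideo = !type.intersection(.anyVideo).isEmpty
    return hasAudio && !hasVideo
}
```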

Is it possible to stream mms, ASX, or RTSP streams on iPhone?

Submitted by 落花浮王杯 on 2019-12-21 04:25:17
Question: I am developing a music streaming application. I can stream mp3 using a method described here. Does anybody know an approach to streaming other formats (ASX, RTSP, or mms) using Core Audio or another framework? Thanks in advance. Answer 1: mms, ASX, and RTSP are historically somewhat proprietary protocols (by Microsoft and Real, in particular), so you may have trouble finding an official Apple implementation. There's an LGPL implementation of the mms protocol here: https://launchpad.net/libmms Or you can

How to record and play audio simultaneously in iOS using Swift?

Submitted by 醉酒当歌 on 2019-12-21 04:00:44
Question: In Objective-C, recording and playing audio simultaneously is fairly simple, and there is a ton of sample code on the internet. But I want to record and play audio simultaneously using Audio Unit/Core Audio in Swift. There is very little help or sample code for this in Swift, and I couldn't find any that showed how to achieve it. I am struggling with the code below. let preferredIOBufferDuration = 0.005 let kInputBus = AudioUnitElement(1) let kOutputBus =
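A minimal sketch, assuming AVAudioEngine is acceptable in place of raw Audio Units: with the session in .playAndRecord, a tap on the input node hands microphone buffers to a player node for immediate playback. The engine and player must stay alive, hence the class:

```swift
import AVFoundation

// A sketch of simultaneous record/play via AVAudioEngine rather than a raw
// RemoteIO unit. The buffer size and category options are assumptions.
class AudioMonitor {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()

    func start() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, options: [.defaultToSpeaker])
        try session.setPreferredIOBufferDuration(0.005) // the question's value
        try session.setActive(true)

        let format = engine.inputNode.outputFormat(forBus: 0)
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: format)

        // Forward each captured buffer straight back out for playback.
        engine.inputNode.installTap(onBus: 0, bufferSize: 256, format: format) {
            [weak self] buffer, _ in
            self?.player.scheduleBuffer(buffer, completionHandler: nil)
        }

        try engine.start()
        player.play()
    }
}
```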

Change pitch of audio during playback on iPhone [duplicate]

Submitted by ぃ、小莉子 on 2019-12-21 02:57:11
Question: This question already has answers here: Real-time Pitch Shifting on the iPhone (5 answers). Closed 3 years ago. What is the best way to accomplish this? From what I have read so far, it seems you have to set up the RemoteIO unit (which is a pain in itself). Do you need to do an FFT? Are there any examples out there? Can I just speed up / slow down playback? Thanks Answer 1: OpenAL lets you pitch-shift with the AL_PITCH source property. Maybe you could run your audio through OpenAL and use that. Answer 2: I've
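OpenAL's AL_PITCH works as answer 1 says, but on current iOS the same effect takes far less setup with AVAudioUnitTimePitch, a different technique than the one in the answer; a minimal sketch:

```swift
import AVFoundation

// Not the OpenAL route from answer 1: a sketch of the same effect with
// AVAudioUnitTimePitch, which shifts pitch in cents without changing tempo.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let timePitch = AVAudioUnitTimePitch()
timePitch.pitch = 300 // +300 cents = three semitones up

engine.attach(player)
engine.attach(timePitch)
engine.connect(player, to: timePitch, format: nil)
engine.connect(timePitch, to: engine.mainMixerNode, format: nil)
// Schedule a file or buffer on `player`, start the engine, and play.
```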

Convert .caf to .mp3 on the iPhone

Submitted by 岁酱吖の on 2019-12-21 02:48:38
Question: Is there a way to convert my recorded .caf files to .mp3 using the iPhone SDK / Core Audio, or something else? I've been looking around for a while, but all I've found was a command-line utility (which isn't allowed to run on the iPhone). Regards Answer 1: Since the iPhone shouldn't really be used for processor-intensive things such as audio conversion, have you thought about transferring those files to your computer and running the conversion there using the command-line tool you mentioned? I'm
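Worth noting: the iPhone SDK ships no MP3 encoder, so the usual on-device compromise (an alternative to the questioner's MP3 requirement, not a drop-in answer) is compressing to AAC in an .m4a container. A sketch using AVAssetExportSession, with hypothetical URLs:

```swift
import AVFoundation

// A sketch of the on-device fallback: compress the .caf to AAC in .m4a via
// AVAssetExportSession, since no MP3 encoder is available in the SDK.
func exportToM4A(from source: URL, to destination: URL,
                 completion: @escaping (Bool) -> Void) {
    let asset = AVURLAsset(url: source)
    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetAppleM4A)
    else { completion(false); return }
    export.outputFileType = .m4a
    export.outputURL = destination
    export.exportAsynchronously {
        completion(export.status == .completed)
    }
}
```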

Reading audio with Extended Audio File Services (ExtAudioFileRead)

Submitted by 不打扰是莪最后的温柔 on 2019-12-21 02:41:57
Question: I am working on understanding Core Audio, or rather: Extended Audio File Services. Here, I want to use ExtAudioFileRead() to read some audio data from a file. This works fine as long as I use one single huge buffer to store my audio data (that is, one AudioBuffer). As soon as I use more than one AudioBuffer, ExtAudioFileRead() returns the error code -50 ("error in parameter list"). As far as I can figure out, this means that one of the arguments of ExtAudioFileRead() is wrong. Probably the
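A sketch of the multi-buffer case, assuming the usual cause of -50 here: the AudioBufferList's mNumberBuffers must match what the client data format implies, i.e. a non-interleaved format needs exactly one AudioBuffer per channel. Shown in Swift with placeholder format values:

```swift
import AudioToolbox

// A sketch of reading with more than one AudioBuffer: set a deinterleaved
// client format, then hand ExtAudioFileRead one AudioBuffer per channel,
// each with a matching mDataByteSize. Format values are placeholders.
func readDeinterleaved(url: URL, frameCount: UInt32) {
    var fileRef: ExtAudioFileRef?
    guard ExtAudioFileOpenURL(url as CFURL, &fileRef) == noErr,
          let file = fileRef else { return }
    defer { ExtAudioFileDispose(file) }

    let channels: UInt32 = 2
    var clientFormat = AudioStreamBasicDescription(
        mSampleRate: 44100,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked
                    | kAudioFormatFlagIsNonInterleaved,
        mBytesPerPacket: 4, mFramesPerPacket: 1, mBytesPerFrame: 4,
        mChannelsPerFrame: channels, mBitsPerChannel: 32, mReserved: 0)
    ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                            UInt32(MemoryLayout.size(ofValue: clientFormat)),
                            &clientFormat)

    // One AudioBuffer per channel: this is what makes mNumberBuffers > 1 legal.
    let bufferList = AudioBufferList.allocate(maximumBuffers: Int(channels))
    let byteSize = frameCount * 4
    for i in 0..<Int(channels) {
        bufferList[i] = AudioBuffer(mNumberChannels: 1,
                                    mDataByteSize: byteSize,
                                    mData: malloc(Int(byteSize)))
    }

    var frames = frameCount
    let status = ExtAudioFileRead(file, &frames, bufferList.unsafeMutablePointer)
    // On success, `frames` holds the number of frames actually read.
    print(status, frames)

    for i in 0..<Int(channels) { free(bufferList[i].mData) }
    free(bufferList.unsafeMutablePointer)
}
```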

Headphone plug-in/plug-out detection in Swift

Submitted by 时光毁灭记忆、已成空白 on 2019-12-20 17:24:13
Question: I'm working on an iPhone app for iOS 8.1 that works with Core Audio to generate frequencies and adjust intensities. In the view controller where I generate the frequencies, I need to detect whether the headphones are unplugged at some point. I'm already checking whether headphones are connected before proceeding to my frequency generator view, with the following function: - (BOOL)isHeadsetPluggedIn { AVAudioSessionRouteDescription* route = [[AVAudioSession sharedInstance] currentRoute]; for
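For the unplug side, AVAudioSession posts a route-change notification whose reason is .oldDeviceUnavailable when headphones are pulled mid-session. A sketch in Swift (the question's code is Objective-C, but the notification and keys are the same):

```swift
import AVFoundation

// A sketch of mid-session unplug detection: observe the session's route
// change and react only to the .oldDeviceUnavailable reason.
NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil,
    queue: .main
) { notification in
    guard let rawReason = notification.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt,
          let reason = AVAudioSession.RouteChangeReason(rawValue: rawReason),
          reason == .oldDeviceUnavailable
    else { return }
    // Headphones (or another output) were just removed: pause the generator here.
}
```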

How can I get the current sound level of the current audio output device?

Submitted by 徘徊边缘 on 2019-12-20 16:20:21
Question: I'm looking for a way to tap into the current audio output on a Mac, then return a value representing the current sound level. By sound level, I mean the amount of noise being generated by the output. I'm NOT asking how to get the current volume level of the output device. Answer 1: The following code is pulled from Apple's sample AVRecorder … this particular bit of code acquires a set of connections from this class's movieFileOutput's connections method, gets the AVCaptureAudioChannel for each
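Condensed to Swift, the approach the answer describes looks roughly like this; `movieFileOutput` is assumed to be an AVCaptureMovieFileOutput attached to a running capture session:

```swift
import AVFoundation

// A sketch of the answer's approach: each AVCaptureAudioChannel on the
// output's connections exposes a running power level in dB.
func currentLevels(of movieFileOutput: AVCaptureMovieFileOutput) -> [Float] {
    var levels: [Float] = []
    for connection in movieFileOutput.connections {
        for channel in connection.audioChannels {
            levels.append(channel.averagePowerLevel) // dBFS; 0 is full scale
        }
    }
    return levels
}
```

Note that this meters a capture input rather than the system's output mix, which is why the answer leans on a capture session in the first place.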

AudioUnit tone generator is giving me a chirp at the end of each tone generated

Submitted by 人走茶凉 on 2019-12-20 15:32:08
Question: I'm creating an old-school music emulator for the old GWBasic PLAY command. To that end I have a tone generator and a music player. Between each of the notes played I'm getting a chirp sound that's mucking things up. Below are both of my classes: ToneGen.h #import <Foundation/Foundation.h> @interface ToneGen : NSObject @property (nonatomic) id delegate; @property (nonatomic) double frequency; @property (nonatomic) double sampleRate; @property (nonatomic) double theta; - (void)play:(float)ms; -
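The classes are cut off above, but end-of-note clicks like this are typically a waveform discontinuity at the buffer boundary (an assumption about the cause, not a diagnosis of this exact code), and the usual fix is a short fade-out over the final milliseconds of each note. A sketch of that ramp:

```swift
import Foundation

// A sketch of the usual fix for clicks between notes: ramp the last few
// milliseconds of each rendered note to zero so the waveform never stops
// abruptly mid-cycle.
func applyFadeOut(to samples: UnsafeMutablePointer<Float>,
                  frameCount: Int,
                  sampleRate: Double,
                  fadeMs: Double = 5) {
    let fadeFrames = min(frameCount, Int(sampleRate * fadeMs / 1000.0))
    guard fadeFrames > 0 else { return }
    for i in 0..<fadeFrames {
        // Linear gain from 1 down to 0 across the fade region.
        let gain = Float(fadeFrames - 1 - i) / Float(fadeFrames)
        samples[frameCount - fadeFrames + i] *= gain
    }
}
```

Carrying `theta` over from one note's last sample into the next, instead of resetting it per note, removes the matching discontinuity at note starts.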