audiobuffer

AudioUnit inputCallback with AudioUnitRender -> mismatch: audioBufferList.mBuffers[0].mDataByteSize != inNumberFrames

一个人想着一个人 submitted on 2020-03-05 04:11:25
Question: We are using the AudioUnit input callback to process the incoming buffer. The audio unit setup is taken mostly from https://github.com/robovm/apple-ios-samples/blob/master/aurioTouch/Classes/AudioController.mm and I have added some sanity checks in the audio callback. It looks like this:

    /// The audio input callback
    static OSStatus audioInputCallback(void __unused *inRefCon,
                                       AudioUnitRenderActionFlags *ioActionFlags,
                                       const AudioTimeStamp *inTimeStamp,
                                       UInt32 __unused inBusNumber,
                                       UInt32 …
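The mismatch named in the title is usually a unit mismatch rather than corruption: mDataByteSize counts bytes while inNumberFrames counts frames. A minimal sketch of the check, assuming a mono, non-interleaved Float32 stream format and that the remote IO unit is passed in as inRefCon (both assumptions, not taken from the question):

    #import <AudioToolbox/AudioToolbox.h>

    static OSStatus audioInputCallback(void *inRefCon,
                                       AudioUnitRenderActionFlags *ioActionFlags,
                                       const AudioTimeStamp *inTimeStamp,
                                       UInt32 inBusNumber,
                                       UInt32 inNumberFrames,
                                       AudioBufferList *ioData)
    {
        AudioUnit rioUnit = (AudioUnit)inRefCon; // assumption: unit passed as refCon

        OSStatus err = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp,
                                       inBusNumber, inNumberFrames, ioData);
        if (err != noErr) return err;

        // mDataByteSize is in BYTES: frames * bytesPerFrame (4 for mono Float32).
        UInt32 expectedBytes = inNumberFrames * sizeof(Float32);
        if (ioData->mBuffers[0].mDataByteSize != expectedBytes) {
            // A genuine mismatch: re-check mBytesPerFrame / mChannelsPerFrame
            // in the stream format set on the unit.
        }
        return noErr;
    }

Comparing mDataByteSize directly against inNumberFrames will therefore "fail" for any format wider than one byte per frame.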

How to convert that UnsafeMutablePointer<UnsafeMutablePointer<Float>> variable into AudioBufferList?

ぃ、小莉子 submitted on 2019-12-24 05:54:45
Question: I have this EZAudio method in my Swift project, to capture audio from the microphone:

    func microphone(microphone: EZMicrophone!,
                    hasAudioReceived bufferList: UnsafeMutablePointer<UnsafeMutablePointer<Float>>,
                    withBufferSize bufferSize: UInt32,
                    withNumberOfChannels numberOfChannels: UInt32) {
    }

But what I really need is for that "bufferList" parameter to come in as an AudioBufferList type, in order to send those audio packets through a socket, just like I did in Objective-C:

    // Objective-C …
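EZAudio's EZMicrophoneDelegate also declares a buffer-list variant of this callback, which hands over an UnsafeMutablePointer<AudioBufferList> directly. A hedged Swift sketch (method name per EZAudio's headers; the exact Swift bridging varies by version, and the socket call is a placeholder):

    import AVFoundation
    // EZAudio is assumed to be available via the project's bridging header.

    func microphone(microphone: EZMicrophone!,
                    hasBufferList bufferList: UnsafeMutablePointer<AudioBufferList>,
                    withBufferSize bufferSize: UInt32,
                    withNumberOfChannels numberOfChannels: UInt32) {
        // Wrap the C struct so the individual buffers are iterable from Swift.
        let buffers = UnsafeMutableAudioBufferListPointer(bufferList)
        for buffer in buffers {
            guard let bytes = buffer.mData else { continue }
            // Raw audio bytes, ready to base64-encode and write to the socket.
            let data = NSData(bytes: bytes, length: Int(buffer.mDataByteSize))
            // socket.send(data) // hypothetical socket call
            _ = data
        }
    }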

Uncaught TypeError: Value is not of type AudioBuffer

我的未来我决定 submitted on 2019-12-24 03:27:48
Question: I get this error when I try to run an XHR to load a sample: Uncaught TypeError: Value is not of type AudioBuffer. Everything seems to be right, but I'm not sure what the problem is.

    Kit.prototype.load = function(){
        if(this.startedLoading) return;
        this.startedLoading = true;
        // var kick = "samples/M-808Sn2.wav";
        var snare = "samples/M-808Sn2.wav";
        // var hihat = "samples/M-808Sn2.wav";
        // this.loadSample(0, kick, false);
        this.loadSample(1, snare, false);
        // this.loadSample(2, hihat, false);
    }
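This error is typically thrown when source.buffer is assigned something that is not the AudioBuffer produced by decodeAudioData: the raw XHR ArrayBuffer, or an array slot the decode callback has not filled yet. A sketch of what loadSample likely needs to do (loadSample's body, kit.context, and kit.buffers are assumed names, not taken from the question):

    Kit.prototype.loadSample = function(index, url, isLoop) {
        var kit = this;
        var request = new XMLHttpRequest();
        request.open("GET", url, true);
        request.responseType = "arraybuffer"; // decodeAudioData wants an ArrayBuffer
        request.onload = function() {
            kit.context.decodeAudioData(request.response, function(decoded) {
                // Store the decoded AudioBuffer, not request.response.
                kit.buffers[index] = decoded;
            }, function(err) {
                console.error("decodeAudioData failed for " + url, err);
            });
        };
        request.send();
    };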

JS Audio - audioBuffer getChannelData to frequency

烈酒焚心 submitted on 2019-12-23 05:23:21
Question: I am trying to achieve pitch detection, and moreover to learn some basic audio physics along the way. I am actually really new to this and just trying to understand how this whole thing works... My question is: what exactly is the audioBuffer, and how is the data coming from getChannelData related to frequencies? How can I extract frequency data from the audioBuffer? Also, if someone could explain just a bit about sample rates etc., that would be great. Thanks!

Answer 1: An AudioBuffer simply …
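The short version of the relationship: getChannelData() returns time-domain samples, one amplitude value every 1/sampleRate seconds, so frequency content has to be computed from them, most easily with an AnalyserNode's built-in FFT. A minimal sketch (the audioBuffer argument is assumed to be an already-decoded AudioBuffer):

    var ctx = new AudioContext();

    // Time domain: samples[n] is the amplitude at n / audioBuffer.sampleRate seconds.
    function dumpTimeDomain(audioBuffer) {
        var samples = audioBuffer.getChannelData(0); // Float32Array in [-1, 1]
        console.log(samples.length + " samples at " + audioBuffer.sampleRate + " Hz");
    }

    // Frequency domain: play the buffer through an AnalyserNode and read FFT bins.
    // Bin i covers frequencies around i * sampleRate / fftSize.
    function analyze(audioBuffer) {
        var source = ctx.createBufferSource();
        source.buffer = audioBuffer;
        var analyser = ctx.createAnalyser();
        analyser.fftSize = 2048;
        source.connect(analyser);
        analyser.connect(ctx.destination);
        source.start();
        var bins = new Float32Array(analyser.frequencyBinCount);
        setInterval(function() {
            analyser.getFloatFrequencyData(bins); // dB value per frequency bin
        }, 100);
    }

Note that an FFT gives a spectrum, not a pitch; pitch detection needs a further step (autocorrelation or peak-picking) on top of it.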

How to release memory using Web Audio API?

本小妞迷上赌 submitted on 2019-12-22 09:06:03
Question:

    var context = new window.AudioContext()
    var request = cc.loader.getXMLHttpRequest();
    request.open("GET", 'res/raw-assets/resources/audio/bgm.mp3', true);
    request.responseType = "arraybuffer";
    request.onload = function () {
        context["decodeAudioData"](request.response, function(buffer){
            //success
            cc.log('success')
            window.buffer = buffer
            playBgm()
        }, function(){
            //error
        });
    };
    request.onerror = function(){
        //error
    };
    request.send();

    function playBgm(){
        var audio = context["createBufferSource"]();
        audio.buffer = buffer;
        var _volume = context['createGain']();
        _volume['gain'].value = 1;
        _volume[ …
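An AudioBuffer is an ordinary garbage-collected object, so "releasing" it means dropping every reference to it and disconnecting the nodes that use it. A hedged sketch of the usual cleanup, reusing the names from the snippet above (context.close() is only available where the browser supports it):

    function releaseBgm(audio) {
        audio.stop();          // stop playback
        audio.disconnect();    // detach the source so the graph can be collected
        window.buffer = null;  // drop the global AudioBuffer reference kept in onload
        // Optional: tear down the whole context once no more audio is needed;
        // this also releases the underlying audio hardware.
        if (context.close) {
            context.close();
        }
    }

One caveat worth stating: as long as window.buffer (or any other variable) still points at the decoded buffer, the memory cannot be reclaimed, no matter what is done to the nodes.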

Swift 3: Using AVCaptureAudioDataOutput to analyze audio input

帅比萌擦擦* submitted on 2019-12-10 22:36:10
Question: I'm trying to use AVCaptureAudioDataOutput to analyze audio input, as described here. This is not stuff I could figure out on my own, so I'm copying the example, but I'm having difficulty. Xcode in Swift 3 has prompted me to make a couple of changes. I'm getting a compile error on the line assigning samples. Xcode says, "Cannot invoke initializer for type 'UnsafeMutablePointer<_>' with an argument list of type '(UnsafeMutableRawPointer?)'". Here's the code as I've modified it:

    func …
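The compile error is Swift 3's pointer-safety model at work: a raw pointer coming back from a C API can no longer be passed straight to UnsafeMutablePointer's initializer; the memory has to be rebound to the element type first. A hedged sketch inside a typical capture delegate callback (the Int16 sample format is an assumption; later SDKs rename the delegate method to captureOutput(_:didOutput:from:) and add argument labels to CMBlockBufferGetDataPointer):

    import AVFoundation

    func captureOutput(_ captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       from connection: AVCaptureConnection!) {
        guard let sampleBuffer = sampleBuffer,
              let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return }

        var length = 0
        var dataPointer: UnsafeMutablePointer<Int8>? = nil
        let status = CMBlockBufferGetDataPointer(blockBuffer, 0, nil, &length, &dataPointer)
        guard status == kCMBlockBufferNoErr, let base = dataPointer else { return }

        let sampleCount = length / MemoryLayout<Int16>.size
        // The fix: rebind the raw bytes to the sample type instead of
        // calling UnsafeMutablePointer(dataPointer).
        base.withMemoryRebound(to: Int16.self, capacity: sampleCount) { samples in
            // samples[0..<sampleCount] are the Int16 PCM values to analyze.
            _ = samples
        }
    }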

How to control the sound volume of (audio buffer) AudioContext()?

一曲冷凌霜 submitted on 2019-12-06 06:07:39
Question: I have the following AudioContext() sound object in JavaScript. Its volume is 100%. I want to play it at 10% volume (volume = 0.1). How can I reduce its volume to 10%?

    const aCtx = new AudioContext();
    let source = aCtx.createBufferSource();
    let buf;
    fetch('https://dl.dropboxusercontent.com/s/knpo4d2yooe2u4h/tank_driven.wav') // can be XHR as well
      .then(resp => resp.arrayBuffer())
      .then(buf => aCtx.decodeAudioData(buf)) // can be callback as well
      .then(decoded => { source.buffer = buf = …
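Volume in the Web Audio API is a node in the graph: route the source through a GainNode and set its gain.value to 0.1. A minimal sketch (these lines belong inside the final .then, after source.buffer is set; variable names follow the question):

    const gainNode = aCtx.createGain();
    gainNode.gain.value = 0.1;          // 1.0 = 100%, 0.1 = 10%

    source.connect(gainNode);           // source -> gain -> speakers
    gainNode.connect(aCtx.destination);
    source.start();

    // The volume can also be changed smoothly during playback:
    // gainNode.gain.setValueAtTime(0.1, aCtx.currentTime);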

Capturing volume levels with AVCaptureAudioDataOutputSampleBufferDelegate in swift

こ雲淡風輕ζ submitted on 2019-12-05 02:48:34
Question: I'm trying to read live volume levels using AVCaptureDevice etc. It compiles and runs, but the values just seem to be random and I keep getting overflow errors as well. EDIT: also, is it normal for the RMS range to be 0 to about 20000?

    if let audioCaptureDevice : AVCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio){
        try audioCaptureDevice.lockForConfiguration()
        let audioInput = try AVCaptureDeviceInput(device: audioCaptureDevice)
        audioCaptureDevice.unlockForConfiguration()
        …
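Both symptoms have a likely cause: the overflow comes from accumulating squared Int16 samples in a type that is too narrow, and an RMS of about 20000 is plausible because RMS over raw 16-bit PCM lands in 0...32767. A hedged sketch of the calculation (assumes the capture format delivers Int16 samples):

    import Foundation

    func rmsLevel(samples: [Int16]) -> (rms: Double, dBFS: Double) {
        guard !samples.isEmpty else { return (0, -Double.infinity) }
        var sumOfSquares = 0.0                 // wide accumulator: no overflow
        for s in samples {
            let v = Double(s)
            sumOfSquares += v * v
        }
        let rms = sqrt(sumOfSquares / Double(samples.count))  // 0...32767
        let normalized = rms / Double(Int16.max)              // 0...1
        let dBFS = 20 * log10(max(normalized, 1e-12))         // avoid log(0)
        return (rms, dBFS)
    }

Dividing by Int16.max normalizes the RMS to 0...1, and the log10 step converts it to the dBFS scale most metering UIs expect.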

How to play NSData audio buffers received from a web socket?

寵の児 submitted on 2019-12-04 06:02:20
Question: I'm creating a call app in Objective-C. My problem is the send/receive audio stream: recording the audio buffers, converting them to NSData, and sending them (base64 encoded) over SocketRocket all works, but I don't know how to play the audio buffers after receiving the NSData from the server. My code:

viewController.h

    #import <UIKit/UIKit.h>
    #import <AudioToolbox/AudioQueue.h>
    #import <AudioToolbox/AudioFile.h>
    #import <SocketRocket/SocketRocket.h>
    #import <AVFoundation/AVFoundation.h>
    #import …
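One common way to play raw PCM packets as they arrive is to copy each one into an AVAudioPCMBuffer and schedule it on an AVAudioPlayerNode. A hedged sketch (the float 48 kHz mono format is an assumption and must match what the sender records; the class and method names here are illustrative, not from the question):

    #import <AVFoundation/AVFoundation.h>

    @interface AudioStreamPlayer : NSObject
    @property (nonatomic, strong) AVAudioEngine *engine;
    @property (nonatomic, strong) AVAudioPlayerNode *player;
    @property (nonatomic, strong) AVAudioFormat *format;
    @end

    @implementation AudioStreamPlayer

    - (instancetype)init {
        if ((self = [super init])) {
            _engine = [[AVAudioEngine alloc] init];
            _player = [[AVAudioPlayerNode alloc] init];
            // Assumed wire format: deinterleaved float, 48 kHz, mono.
            _format = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:48000 channels:1];
            [_engine attachNode:_player];
            [_engine connect:_player to:_engine.mainMixerNode format:_format];
            [_engine startAndReturnError:nil];
            [_player play];
        }
        return self;
    }

    // Call with each packet after base64-decoding the socket message to NSData.
    - (void)playPCMData:(NSData *)data {
        AVAudioFrameCount frames = (AVAudioFrameCount)(data.length / sizeof(float));
        AVAudioPCMBuffer *buffer =
            [[AVAudioPCMBuffer alloc] initWithPCMFormat:self.format frameCapacity:frames];
        buffer.frameLength = frames;
        memcpy(buffer.floatChannelData[0], data.bytes, data.length);
        [self.player scheduleBuffer:buffer completionHandler:nil];
    }

    @end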