web-audio-api

Distortion in WebAudio API in iOS9?

江枫思渺然 submitted on 2020-01-04 01:36:07
Question: I've been working on a cross-platform Cordova app using Web Audio for sound synthesis and have recently begun having trouble with distorted audio output after upgrading my phone to iOS 9.2. Basically, on 2 out of 3 runs of the app on my phone, oscillator output will be buzzy and sound distorted, possibly as if it's running at the wrong sample rate. Prior to the upgrade I'd never encountered the issue, but now even a simple audio chain like this will end up manifesting the problem: this.osc =
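A commonly reported workaround for this class of iOS problem (not confirmed by the question itself) is that the context sometimes comes up at an unexpected hardware sample rate; checking context.sampleRate and recreating the context when it looks wrong often avoids the buzzing. A minimal sketch, where EXPECTED_RATE is an illustrative assumption:

var EXPECTED_RATE = 44100;            // illustrative; pick your app's target rate

function createAudioContext() {
  var Ctx = window.AudioContext || window.webkitAudioContext;
  var ctx = new Ctx();
  if (ctx.sampleRate !== EXPECTED_RATE && typeof ctx.close === 'function') {
    ctx.close();                      // discard the mis-configured context
    ctx = new Ctx();                  // the replacement often gets the right rate
  }
  return ctx;
}

var audioCtx = createAudioContext();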

Mute microphone in speakers but still be able to analyze (createAnalyser) with Web Audio Api?

旧城冷巷雨未停 submitted on 2020-01-03 19:58:09
Question: I'm trying to create an Analyser node to get the signal from a microphone and draw a graphic from the received input, but I don't want the speakers to receive the microphone signal. Source (microphone) -> Analyser -> Destination(?) The destination is always the speakers... Can I point the destination at a void or similar and still be able to analyze the microphone? I tried to play with the volume (gain node), but that affects the analyser in the end. In summary: I need
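A minimal sketch of the usual answer: an AnalyserNode keeps producing data even when nothing downstream reaches the speakers, so simply leave out the connection to ctx.destination (the drawing loop and names here are illustrative):

navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
  var ctx = new (window.AudioContext || window.webkitAudioContext)();
  var source = ctx.createMediaStreamSource(stream);
  var analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser);           // note: no analyser.connect(ctx.destination)

  var data = new Uint8Array(analyser.frequencyBinCount);
  (function draw() {
    analyser.getByteFrequencyData(data);   // use `data` to drive the graphic
    requestAnimationFrame(draw);
  })();
});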

Manually put pcm data into AudioBuffer

依然范特西╮ submitted on 2020-01-03 17:22:27
Question: So I've pulled the channel data out of an AudioBuffer and sent it via transferable object to a web worker to do some processing on it, and now I want to put it back in. Do I really have to copy it back in like this? var myData = new Float32Array(audioBuf.length); var chanData = audioBuf.getChannelData(0); for ( var n = 0; n < chanData.length; n++ ) { chanData[n] = myData[n]; } I'm really hoping there is some way to just swap out the ArrayBuffer that each of the AudioBuffer channels references.
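A short sketch of the two usual alternatives to the element-by-element loop; swapping the underlying ArrayBuffer itself is not something the AudioBuffer interface exposes:

function writeBack(audioBuf, myData /* Float32Array returned from the worker */) {
  if (typeof audioBuf.copyToChannel === 'function') {
    audioBuf.copyToChannel(myData, 0);        // modern browsers
  } else {
    audioBuf.getChannelData(0).set(myData);   // one bulk copy instead of a loop
  }
}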

Play PCM with javascript

不问归期 submitted on 2020-01-01 07:03:30
Question: I'm having some problems playing PCM audio in the browser. The PCM audio comes from an Android device over UDP and is saved on the server as *.raw. I have been unsuccessfully trying to play this saved file with the Web Audio API. The following code plays a creepy sound with white noise: var audioCtx = new (window.AudioContext || window.webkitAudioContext)(); audioCtx.sampleRate = 16000; // Stereo var channels = 1; // Create an empty two second stereo buffer at the // sample rate of
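White noise out of raw PCM usually means the bytes are being interpreted with the wrong format; note also that an AudioContext's sampleRate is read-only, so assigning to it has no effect. A hedged sketch, assuming the .raw file is 16-bit signed mono PCM at 16 kHz (the URL and format are assumptions):

var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

fetch('/audio/capture.raw')                       // assumed path to the saved file
  .then(function (res) { return res.arrayBuffer(); })
  .then(function (raw) {
    var pcm16 = new Int16Array(raw);
    var buffer = audioCtx.createBuffer(1, pcm16.length, 16000);
    var channel = buffer.getChannelData(0);
    for (var i = 0; i < pcm16.length; i++) {
      channel[i] = pcm16[i] / 32768;              // int16 -> float in [-1, 1)
    }
    var src = audioCtx.createBufferSource();
    src.buffer = buffer;
    src.connect(audioCtx.destination);
    src.start();
  });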

Web Audio API for live streaming?

瘦欲@ submitted on 2019-12-31 08:03:10
Question: We need to stream live audio (from a medical device) to web browsers with no more than 3-5 s of end-to-end delay (assume 200 ms or less network latency). Today we use a browser plugin (NPAPI) for decoding, filtering (high-pass, low-pass, band-pass), and playback of the audio stream (delivered via WebSockets). We want to replace the plugin. I was looking at various Web Audio API demos, and most of our required functionality (playback, gain control, filtering) appears to be available in the Web Audio API.
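As an illustration only (the question includes no code), the filtering and gain stages map onto BiquadFilterNode and GainNode, and incoming PCM chunks can be scheduled back to back on buffer sources; the chunk decoding, framing, and parameter values below are assumptions:

var ctx = new (window.AudioContext || window.webkitAudioContext)();
var bandpass = ctx.createBiquadFilter();
var gain = ctx.createGain();
bandpass.type = 'bandpass';
bandpass.frequency.value = 1000;     // illustrative centre frequency
gain.gain.value = 0.8;               // illustrative level
bandpass.connect(gain);
gain.connect(ctx.destination);

var playAt = 0;                      // schedule incoming chunks back to back
function playChunk(float32Samples, sampleRate) {
  var buf = ctx.createBuffer(1, float32Samples.length, sampleRate);
  buf.getChannelData(0).set(float32Samples);
  var src = ctx.createBufferSource();
  src.buffer = buf;
  src.connect(bandpass);
  playAt = Math.max(playAt, ctx.currentTime);
  src.start(playAt);
  playAt += buf.duration;
}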

(Web Audio API) Oscillator node error: cannot call start more than once

我的未来我决定 submitted on 2019-12-30 03:50:29
Question: When I start my oscillator, stop it, and then start it again, I get the following error: Uncaught InvalidStateError: Failed to execute 'start' on 'OscillatorNode': cannot call start more than once. Obviously I could use gain to "stop" the audio, but that strikes me as poor practice. What's a more efficient way of stopping the oscillator while being able to start it again? code (jsfiddle) var ctx = new AudioContext(); var osc = ctx.createOscillator(); osc.frequency.value = 8000; osc.connect(ctx
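A minimal sketch of the standard answer: oscillator nodes are one-shot sources, so build a fresh node for every start instead of restarting the old one (the function names here are illustrative):

var ctx = new (window.AudioContext || window.webkitAudioContext)();
var osc = null;

function startTone() {
  osc = ctx.createOscillator();
  osc.frequency.value = 8000;
  osc.connect(ctx.destination);
  osc.start();
}

function stopTone() {
  if (osc) {
    osc.stop();
    osc.disconnect();
    osc = null;                 // the next startTone() builds a new node
  }
}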

decodeAudioData returning a null error

喜欢而已 submitted on 2019-12-28 10:45:47
Question: I come here hoping that you lovely folks here on SO can help me out with a bit of a problem that I'm having. Specifically, every time I attempt to use the decodeAudioData method of a webkitAudioContext, it always triggers the error handler with a null error. This is the code that I'm currently using: var soundArray; var context = new webkitAudioContext(); function loadSound(soundName) { var request = new XMLHttpRequest(); request.open('GET',soundName); request.responseType = 'arraybuffer';
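A sketch of how the rest of the request/decode flow typically looks, with the decode wrapped in explicit success and error callbacks; the playback step is omitted and soundArray is treated as a plain array (an assumption):

var soundArray = [];
var context = new (window.AudioContext || window.webkitAudioContext)();

function loadSound(soundName) {
  var request = new XMLHttpRequest();
  request.open('GET', soundName);
  request.responseType = 'arraybuffer';
  request.onload = function () {
    context.decodeAudioData(
      request.response,
      function (decoded) { soundArray.push(decoded); },
      function (err) {
        // A null/undefined error here usually means the bytes were not a
        // supported audio format (e.g. a 404 page served as HTML).
        console.error('decodeAudioData failed for', soundName, err);
      }
    );
  };
  request.send();
}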

Why aren't Safari or Firefox able to process audio data from MediaElementSource?

和自甴很熟 submitted on 2019-12-28 00:59:11
Question: Neither Safari nor Firefox is able to process audio data from a MediaElementSource using the Web Audio API. var audioContext, audioProcess, audioSource, result = document.createElement('h3'), output = document.createElement('span'), mp3 = '//www.jonathancoulton.com/wp-content/uploads/encodes/Smoking_Monkey/mp3/09_First_of_May_mp3_3a69021.mp3', ogg = '//upload.wikimedia.org/wikipedia/en/4/45/ACDC_-_Back_In_Black-sample.ogg', gotData = false, data, audio = new Audio(); function connect() {
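One frequent cause of silent or all-zero data from a MediaElementSource (not necessarily the cause in this specific question) is cross-origin media served without CORS; a hedged sketch of opting the element into CORS before building the graph, reusing the ogg URL from the snippet above:

var audio = new Audio();
audio.crossOrigin = 'anonymous';     // the server must also send CORS headers
audio.src = '//upload.wikimedia.org/wikipedia/en/4/45/ACDC_-_Back_In_Black-sample.ogg';

var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var audioSource = audioContext.createMediaElementSource(audio);
var analyser = audioContext.createAnalyser();
audioSource.connect(analyser);
analyser.connect(audioContext.destination);
audio.play();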

How to save sound with Web Audio API and Tone.js in a browser

穿精又带淫゛_ submitted on 2019-12-25 17:59:05
Question: Using the above-mentioned library, what's the easiest way to save tunes on a server? I'm trying to think of a way to record the sounds in the browser and be able to save them to the server. And would I be able to store them in a database? What is the best way to do so? Can anyone explain a bit? Answer 1: The easiest method is to use the MediaRecorder API to capture a MediaStream, to which your audio graph is connected. This will get you a recording, typically in Opus (unless you want Vorbis,
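A sketch of that approach in plain Web Audio terms: route the graph's output (outputNode below is a stand-in for whatever your Tone.js/Web Audio graph ends in) into a MediaStreamAudioDestinationNode, record its stream with MediaRecorder, and upload the resulting blob; the /upload endpoint is an assumption:

var ctx = new (window.AudioContext || window.webkitAudioContext)();
var outputNode = ctx.createGain();            // stand-in for your graph's output
var streamDest = ctx.createMediaStreamDestination();
outputNode.connect(streamDest);
outputNode.connect(ctx.destination);          // keep hearing it while recording

var recorder = new MediaRecorder(streamDest.stream);
var chunks = [];
recorder.ondataavailable = function (e) { chunks.push(e.data); };
recorder.onstop = function () {
  var blob = new Blob(chunks, { type: recorder.mimeType });
  var form = new FormData();
  form.append('tune', blob, 'tune.webm');
  fetch('/upload', { method: 'POST', body: form });   // assumed endpoint
};
recorder.start();
// ... later, when the tune is done: recorder.stop();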

Web Audio API: Pan audio (left/right) + controlling gain

不问归期 submitted on 2019-12-25 16:29:43
Question: I want to create a very basic AudioContext() instance playing a sound on either the left or the right channel. I know there is already an answer about this here: https://stackoverflow.com/a/20850704/1138860 My problem is that I have to control the gain via a GainNode. Whenever I connect the GainNode, both channels output sound again. I extended the original example from the answer with a GainNode: http://jsbin.com/cofiwugeca/4/edit?js,output Answer 1: It looks like you
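A minimal sketch (independent of the linked jsbin) that keeps gain and panning separate by placing the GainNode before a StereoPannerNode; pan = -1 is hard left, +1 is hard right:

var ctx = new (window.AudioContext || window.webkitAudioContext)();
var osc = ctx.createOscillator();
var gain = ctx.createGain();
var panner = ctx.createStereoPanner();

gain.gain.value = 0.5;            // volume, independent of placement
panner.pan.value = -1;            // left channel only

osc.connect(gain);
gain.connect(panner);
panner.connect(ctx.destination);
osc.start();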