web-audio-api

AudioWorklet error: DOMException: The user aborted a request

泄露秘密 submitted on 2019-12-17 20:14:18

Question: I've successfully instantiated a simple AudioWorklet in React and want to start a simple oscillator, as in Google's example. To test-run it, I render a button whose onClick event calls the following (src/App.jsx): userGesture() { // create a new AudioContext this.context = new AudioContext(); // add our processor module to the AudioWorklet this.context.audioWorklet.addModule('worklet/processor.js').then(() => { // create an oscillator and run it through the processor let oscillator …
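This DOMException from addModule() commonly surfaces when the processor script itself fails to load or parse, so the worklet code is worth checking in isolation. The core of any processor is the loop its process() callback runs over each 128-sample render quantum; a minimal sketch of that inner loop, written as a pure function so it can be reasoned about outside the browser (all names here are illustrative, not from the question's processor.js):

```javascript
// Fill one output channel with a sine oscillator and return the updated
// phase, so the next callback can continue the waveform seamlessly.
function fillSine(channel, phase, frequency, sampleRate) {
  const step = (2 * Math.PI * frequency) / sampleRate;
  for (let i = 0; i < channel.length; i++) {
    channel[i] = Math.sin(phase);
    phase += step;
  }
  return phase % (2 * Math.PI);
}

// In a real processor.js this would be wrapped roughly as:
// class OscProcessor extends AudioWorkletProcessor {
//   process(inputs, outputs) {
//     this.phase = fillSine(outputs[0][0], this.phase || 0, 440, sampleRate);
//     return true; // keep the processor alive
//   }
// }
// registerProcessor('osc-processor', OscProcessor);

const block = new Float32Array(128); // one render quantum
const nextPhase = fillSine(block, 0, 440, 44100);
```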

Web audio api, stop sound gracefully

给你一囗甜甜゛ submitted on 2019-12-17 19:55:51

Question: The Web Audio API provides the .stop() method to stop a sound. I want my sound's volume to decrease before stopping, so I used a gain node. However, I'm running into weird issues where some sounds just don't play, and I can't figure out why. Here is a simplified version of what I do: https://jsfiddle.net/01p1t09n/1/ You'll hear that if you remove the line with setTimeout(), every sound plays; when setTimeout() is there, not every sound plays. What really confuses me is that I use …
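The usual shape of this pattern is to schedule the fade and the stop on the audio clock rather than with setTimeout(). The pure helper below mirrors what gain.linearRampToValueAtTime() computes, so the fade can be reasoned about (and tested) outside the browser; all names are illustrative:

```javascript
// Value of a linear ramp from v0 at time t0 to v1 at time t1, sampled at t.
function linearRampValue(v0, v1, t0, t1, t) {
  if (t <= t0) return v0;
  if (t >= t1) return v1;
  return v0 + (v1 - v0) * ((t - t0) / (t1 - t0));
}

// In the browser the equivalent schedule would be (sketch):
//   const now = context.currentTime;
//   gain.gain.setValueAtTime(1, now);
//   gain.gain.linearRampToValueAtTime(0, now + fadeTime);
//   source.stop(now + fadeTime); // stop only once the ramp has finished
const fadeTime = 0.5;
const midway = linearRampValue(1, 0, 0, fadeTime, 0.25);
```

Scheduling stop() against context.currentTime avoids the race that setTimeout() introduces, since the timer clock and the audio clock are not synchronized.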

How to create very basic left/right equal power panning with createPanner();

时光毁灭记忆、已成空白 submitted on 2019-12-17 18:45:59

Question: I'm looking at the Web Audio API spec, and the panner node uses three values to create a 3D sound field. I was wondering whether, to create a basic 2D "equal power" panner, the programmer needs to implement the scaling formulas manually, or whether I'm overthinking it and there is a simpler way. EDIT: There is now a StereoPanner node being introduced. Answer 1: I can still get a panning effect by changing only the first argument to setPosition() and keeping the other arguments …
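For reference, the "formulaic programming" in question is small: an equal-power pan law maps the pan position onto a quarter circle so that the two channel gains always satisfy gainL² + gainR² = 1, keeping perceived loudness constant across the sweep. A minimal sketch (not tied to the question's code):

```javascript
// Equal-power pan law: pan in [-1, 1], -1 = hard left, +1 = hard right.
function equalPowerGains(pan) {
  const x = (pan + 1) / 2; // normalize to [0, 1]
  return {
    left: Math.cos((x * Math.PI) / 2),
    right: Math.sin((x * Math.PI) / 2),
  };
}

// These gains could drive two GainNodes feeding a ChannelMergerNode, or
// be replaced entirely by StereoPannerNode, which implements this law.
const center = equalPowerGains(0); // both channels at ~0.707
```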

Changing Speed of Audio Using the Web Audio API Without Changing Pitch

回眸只為那壹抹淺笑 submitted on 2019-12-17 18:37:49

Question: Is it possible to change the tempo of audio (in the form of loaded MP3 files) without changing the pitch using the Web Audio API? I'm aware of the playbackRate property on AudioBufferSourceNode, but that also changes pitch. I'm also aware of the playbackRate property on <audio> and <video> elements, but I need to use the Web Audio API. I'm very new to the Web Audio API. Is there anything I can do? Answer 1: There is a way to do this: it's called granular synthesis (the link points to a Pd theory …
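The scheduling arithmetic behind granular time-stretching can be sketched in a few lines (names illustrative, and simplified: a real implementation overlaps grains and applies fade envelopes to avoid clicks). To play at `stretch` times the original duration without changing pitch, grains are played at the original rate, but their read positions in the source advance by grainSize / stretch per grain:

```javascript
// Compute where each grain reads from the source and when it plays.
function grainSchedule(sourceDuration, grainSize, stretch) {
  const grains = [];
  let writeTime = 0;
  for (let readPos = 0; readPos + grainSize <= sourceDuration; readPos += grainSize / stretch) {
    grains.push({ readPos, writeTime });
    writeTime += grainSize; // grains laid out back to back in output time
  }
  return grains;
}

// In the browser, each grain would become an AudioBufferSourceNode
// started at writeTime with an offset of readPos into the buffer.
const grains = grainSchedule(2.0, 0.1, 2); // 2 s source, 0.1 s grains, half speed
```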

Define 'valid mp3 chunk' for decodeAudioData (WebAudio API)

送分小仙女□ submitted on 2019-12-17 15:35:49

Question: I'm trying to use decodeAudioData to decode and play back an initial portion of a larger MP3 file, in JavaScript. My first, crude approach was slicing a number of bytes off the beginning of the MP3 and feeding them to decodeAudioData. Not surprisingly, this fails. After some digging, it seems that decodeAudioData can only work with 'valid MP3 chunks', as documented by Fair Dinkum Thinkum, here. However, there is no clarification of the structure of a valid MP3 chunk (the author of the …
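For reference, an MP3 stream is a sequence of self-contained frames, and a slice is generally only decodable if it begins on a frame boundary: a 4-byte header starting with an 11-bit sync pattern (0xFFE...) followed by valid version, layer, bitrate, and sample-rate fields. A hedged sketch of that check, which could be used to trim a byte slice forward to the first frame boundary before handing it to decodeAudioData:

```javascript
// Check whether three consecutive bytes look like the start of an MP3 frame.
function isMp3FrameHeader(b0, b1, b2) {
  if (b0 !== 0xff || (b1 & 0xe0) !== 0xe0) return false; // 11-bit frame sync
  if (((b1 >> 3) & 0x03) === 1) return false;            // reserved MPEG version
  if (((b1 >> 1) & 0x03) === 0) return false;            // reserved layer
  if (((b2 >> 4) & 0x0f) === 0x0f) return false;         // invalid bitrate index
  if (((b2 >> 2) & 0x03) === 0x03) return false;         // invalid sample-rate index
  return true;
}

const ok = isMp3FrameHeader(0xff, 0xfb, 0x90); // MPEG-1 Layer III header bytes
```

Note that many MP3 files begin with an ID3 tag rather than a frame header, which is one reason a blind byte slice from offset 0 onward fails.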

Should I disconnect nodes that can't be used anymore?

风流意气都作罢 submitted on 2019-12-14 03:55:41

Question: I'm experimenting with Web Audio, and I made a function to play a note. var context = new (window.AudioContext || window.webkitAudioContext)() var make_triangle = function(destination, frequency, start, duration) { var osc = context.createOscillator() osc.type = "triangle" osc.frequency.value = frequency var gain = context.createGain() osc.connect(gain) gain.connect(destination) // timing osc.start(start) osc.stop(start + 2*duration) // this line is discussed later gain.gain.setValueAtTime(0 …
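A common pattern for this situation is to let the source's `ended` event tear down the chain, so finished one-shot notes don't keep their nodes connected to the graph. A sketch with a generic helper (names illustrative); since it relies only on the onended/disconnect interface, it is demonstrated here with plain stand-in objects rather than a live AudioContext:

```javascript
// When the source finishes, disconnect it and every node in its chain.
function disconnectOnEnded(source, ...nodes) {
  source.onended = () => {
    for (const node of nodes) node.disconnect();
    source.disconnect();
  };
}

// In the question's make_triangle this would be, after osc.stop(...):
//   disconnectOnEnded(osc, gain);
// Minimal stand-ins to show the behaviour outside a browser:
const gain = { disconnected: false, disconnect() { this.disconnected = true; } };
const osc = { disconnected: false, disconnect() { this.disconnected = true; } };
disconnectOnEnded(osc, gain);
osc.onended(); // the browser fires this once the scheduled stop time passes
```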

OfflineAudioContext FFT analysis with chrome

北慕城南 submitted on 2019-12-14 03:39:39

Question: I'm trying to build a waveform generator that reads an audio file's amplitude values and displays them on a canvas as quickly as possible (faster than real time) in JavaScript. So I use OfflineAudioContext / webkitOfflineAudioContext, load the file, and start the analysis. The waveform is meant to fill a wide canvas. I analyse the buffer in a processor.onaudioprocess function (I guess that's how it works?). It works fine in Firefox, but I have an issue in Chrome: it seems to "jump" over much of the analysis to …
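An alternative that sidesteps onaudioprocess entirely: once OfflineAudioContext's startRendering() resolves with a complete AudioBuffer, per-pixel peaks can be computed directly from the channel data. A sketch of that peak pass (names illustrative):

```javascript
// Reduce a sample array to one {min, max} pair per canvas column.
function peaksPerPixel(samples, width) {
  const bucket = Math.floor(samples.length / width);
  const peaks = [];
  for (let px = 0; px < width; px++) {
    let min = Infinity;
    let max = -Infinity;
    for (let i = px * bucket; i < (px + 1) * bucket; i++) {
      if (samples[i] < min) min = samples[i];
      if (samples[i] > max) max = samples[i];
    }
    peaks.push({ min, max }); // one vertical line of the waveform
  }
  return peaks;
}

// In the browser: samples = renderedBuffer.getChannelData(0).
const ramp = Float32Array.from({ length: 1000 }, (_, i) => i / 1000);
const peaks = peaksPerPixel(ramp, 10); // 10 columns, 100 samples each
```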

How to connect Web Audio API to Tone.js?

≡放荡痞女 submitted on 2019-12-14 03:08:08

Question: I'm building an online audio player, so I want to integrate a pitch shifter in my app, which is available in Tone.js but not in the Web Audio API. So my idea is to connect Tone.js's PitchShift to the Web Audio API's audioContext. Is there any possible way? Here is my code, for reference: var audioCtx = new (window.AudioContext || window.webkitAudioContext); var mediaElem = document.querySelector('audio'); var stream = audioCtx.createMediaElementSource(mediaElem); var gainNode = audioCtx.createGain() …

Webaudio Playback from WebSocket has drop-outs

泄露秘密 submitted on 2019-12-13 18:35:02

Question: I have a software-defined radio playing an audio stream from a WebSocket server, and a client which consumes the data and plays it using an AudioBufferSourceNode. It mostly works. The only problem is that there are momentary dropouts every few seconds, presumably caused by the overhead of creating each successive AudioBufferSourceNode instance. The Web Audio draft spec says that AudioBuffer should be used for playing sounds that are no longer than a minute or so, and that longer …
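The usual fix for this class of dropout is to stop creating one AudioBufferSourceNode per WebSocket message and instead feed a single continuous output from a ring buffer (today via an AudioWorklet; historically a ScriptProcessorNode). The ring buffer itself is plain JavaScript; a sketch (names illustrative, and simplified: it does not guard against overwriting unread samples):

```javascript
// Single-producer, single-consumer float ring buffer.
class FloatRingBuffer {
  constructor(capacity) {
    this.buf = new Float32Array(capacity);
    this.readIdx = 0;
    this.writeIdx = 0;
    this.length = 0; // samples currently buffered
  }
  write(samples) { // called from the WebSocket onmessage handler
    for (const s of samples) {
      this.buf[this.writeIdx] = s;
      this.writeIdx = (this.writeIdx + 1) % this.buf.length;
      this.length = Math.min(this.length + 1, this.buf.length);
    }
  }
  read(out) { // called from the audio render callback
    for (let i = 0; i < out.length; i++) {
      if (this.length > 0) {
        out[i] = this.buf[this.readIdx];
        this.readIdx = (this.readIdx + 1) % this.buf.length;
        this.length--;
      } else {
        out[i] = 0; // underrun: emit silence instead of glitching
      }
    }
  }
}

const ring = new FloatRingBuffer(8);
ring.write([0.1, 0.2, 0.3]);
const out = new Float32Array(4);
ring.read(out); // fourth sample underruns to silence
```

Because the audio callback always has a node to pull from, network jitter shows up as buffered latency rather than as gaps between one-shot source nodes.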

Does MediaElementSource use less memory than BufferSource in the Web Audio API?

家住魔仙堡 submitted on 2019-12-13 18:18:58

Question: I'm making a little app that will play audio files (MP3, WAV) with the ability to apply an equalizer to them (like a regular audio player); for this I'm using the Web Audio API. I managed to get the play part working in two ways. Using decodeAudioData of BaseAudioContext: function getData() { source = audioCtx.createBufferSource(); var request = new XMLHttpRequest(); request.open('GET', 'viper.ogg', true); request.responseType = 'arraybuffer'; request.onload = function() { var audioData = request …