web-audio-api

Streaming Live audio to the browser - Alternatives to the Web Audio API?

心已入冬 submitted on 2019-12-12 02:09:08
Question: I am attempting to stream live audio from an iOS device to a web browser. The iOS device sends small, mono wav files (as they are recorded) through a web socket. Once the client receives the wav files, I have the Web Audio API decode and schedule them accordingly. This gets me about 99% of the way there, except I can hear clicks between each audio chunk. After some reading around, I have realized the likely source of my problem: the audio is being recorded at a sample rate of only 4k and this…
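A minimal sketch of the usual way to avoid clicks between chunks, assuming each wav chunk is decoded as it arrives: schedule every chunk against a running timeline instead of starting it "now". Names like `ctx` and `playChunk` are illustrative, not from the question.

```javascript
// Schedule decoded chunks back-to-back on the AudioContext clock so
// there is no gap (and no click) between them.
const ctx = new (window.AudioContext || window.webkitAudioContext)();
let nextStartTime = 0;

function playChunk(arrayBuffer) {
  ctx.decodeAudioData(arrayBuffer, (audioBuffer) => {
    const source = ctx.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(ctx.destination);

    // Start the first chunk slightly in the future, then queue each
    // following chunk exactly where the previous one ends.
    const now = ctx.currentTime;
    if (nextStartTime < now) nextStartTime = now + 0.05;
    source.start(nextStartTime);
    nextStartTime += audioBuffer.duration;
  });
}
```

This does not fix artifacts caused by the low 4 kHz recording rate itself, only the gaps introduced by ad-hoc scheduling.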

The index is not in the allowed range

只愿长相守 submitted on 2019-12-12 01:48:35
Question: I'm currently having an issue with Tone.Analyser in Safari 10.1. When initializing the Tone.Analyser with a size > Math.pow(2, 10) (1024), I get the following error: IndexSizeError (DOM Exception 1): The index is not in the allowed range. I've also submitted this to the ToneJS repository, but I feel like this is more likely a bug in Safari, right? Code: import Tone from 'tone'; const sampleSize = Math.pow(2, 13); // Math.pow(2, 10); works... this.fft = new Tone.Analyser('fft', sampleSize);
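A rough workaround sketch, under the assumption that Safari simply rejects large FFT sizes on its native AnalyserNode: probe for the largest size the browser accepts and fall back to it before constructing the Tone.Analyser. The `maxSupportedFftSize` helper is an assumption for illustration.

```javascript
import Tone from 'tone';

// Try the preferred FFT size on a native AnalyserNode, halving it until
// the browser stops throwing IndexSizeError.
function maxSupportedFftSize(preferred) {
  const ctx = new (window.AudioContext || window.webkitAudioContext)();
  let size = preferred;
  while (size > 32) {
    try {
      const analyser = ctx.createAnalyser();
      analyser.fftSize = size;   // throws in Safari if the size is unsupported
      return size;
    } catch (e) {
      size /= 2;                 // halve and try again
    }
  }
  return 32;                     // spec minimum
}

const sampleSize = maxSupportedFftSize(Math.pow(2, 13));
const fft = new Tone.Analyser('fft', sampleSize);
```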

How to modulate params from a Web Audio Api ScriptProcessor?

我只是一个虾纸丫 submitted on 2019-12-12 00:28:59
Question: I am working on a browser synth with the Web Audio API. Instead of using the built-in OscillatorNode, I want to develop a custom oscillator model via the ScriptProcessorNode. I am able to modulate the AudioParams of the built-in nodes with other nodes. How can I connect internal params of the ProcessorNode to other AudioNodes? Answer 1: If you mean "how do I create AudioParam members of a ScriptProcessorNode that I can connect other sources to, to modulate my ScriptProcessor" - the short answer…
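Since the answer is cut off, here is a hedged sketch of one common workaround rather than an official API: a ScriptProcessorNode has no AudioParams of its own, so the modulating signal is merged in as an extra input channel and read sample-by-sample inside onaudioprocess. All names and the placeholder signal are illustrative.

```javascript
const ctx = new (window.AudioContext || window.webkitAudioContext)();

const lfo = ctx.createOscillator();       // modulation source
lfo.frequency.value = 5;

const merger = ctx.createChannelMerger(2);
const processor = ctx.createScriptProcessor(1024, 2, 1);

// Channel 0 could carry audio to process; channel 1 carries the modulator.
lfo.connect(merger, 0, 1);
merger.connect(processor);
processor.connect(ctx.destination);

processor.onaudioprocess = (e) => {
  const mod = e.inputBuffer.getChannelData(1); // the per-sample "param" signal
  const out = e.outputBuffer.getChannelData(0);
  for (let i = 0; i < out.length; i++) {
    out[i] = Math.random() * 2 - 1;            // placeholder custom oscillator
    out[i] *= 0.5 + 0.5 * mod[i];              // use the modulator, e.g. as gain
  }
};

lfo.start();
```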

click functions through references variables on a canvas-possible?

别说谁变了你拦得住时间么 submitted on 2019-12-11 22:27:34
Question: Just ran into a bit of a wall with my current project. For my computer music course we have to create a 24-key (2-octave) keyboard by first rendering a keyboard on a canvas and then using Web Audio to load and play 24 different sounds. I have gotten my clips successfully loaded into an array (or so I hope!) but am a bit confused as to how I would go about handling click events and playing each sound. Searching around the internet has only yielded results about handling click events through…
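A minimal sketch of the click-handling side, assuming the 24 decoded AudioBuffers already sit in an array and the keys are drawn as equal-width rectangles. The `keyboard` element id, the `buffers` array and the key-width math are assumptions.

```javascript
const ctx = new (window.AudioContext || window.webkitAudioContext)();
const buffers = [];   // assumed to be filled elsewhere with 24 decoded AudioBuffers

const canvas = document.getElementById('keyboard');
const keyWidth = canvas.width / 24;

canvas.addEventListener('click', (event) => {
  // Map the click's x position to a key index.
  const rect = canvas.getBoundingClientRect();
  const x = event.clientX - rect.left;
  const keyIndex = Math.min(23, Math.floor(x / keyWidth));

  // Each press needs a fresh AudioBufferSourceNode; they are one-shot.
  const source = ctx.createBufferSource();
  source.buffer = buffers[keyIndex];
  source.connect(ctx.destination);
  source.start();
});
```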

Why does the stop method in AudioBufferSourceNode destroy it?

南楼画角 submitted on 2019-12-11 18:58:59
Question: We know that when you invoke #.stop() on an AudioBufferSourceNode, you cannot then #.start(). Why is the behavior so? This issue came up when playing around with the Web Audio API, as we all find out sooner or later when trying to implement pause functionality. What piqued my interest was: sure, I understand that it's a stream and you can't simply "pause" a stream. But why does it get destroyed? Internally, is there not a pointer to the data, or does the data simply get pushed to the destination and…
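For reference, a sketch of the usual pause/resume pattern given that an AudioBufferSourceNode is one-shot: keep the AudioBuffer around, remember the playback offset, and build a new source node on resume. Variable names are illustrative.

```javascript
const ctx = new (window.AudioContext || window.webkitAudioContext)();
let buffer;            // the decoded AudioBuffer, kept for reuse
let source = null;
let startedAt = 0;     // ctx.currentTime when playback began
let pausedAt = 0;      // offset into the buffer when paused

function play() {
  source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start(0, pausedAt);           // resume from the stored offset
  startedAt = ctx.currentTime - pausedAt;
}

function pause() {
  pausedAt = ctx.currentTime - startedAt;
  source.stop();                        // this node is now unusable
  source = null;                        // a new one is created on play()
}
```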

Testing A Library That Uses The Web Audio API With Mocha & Chai

≡放荡痞女 submitted on 2019-12-11 17:42:07
Question: I am building a library which uses the Web Audio API (ToneJS, to be more specific). I have tried using jsdom and mocha-jsdom with no success. I get this error: node_modules/tone/build/Tone.js:3869 this.input = this.output = this._gainNode = this.context.createGain(); This makes sense and tells me that I need an environment with an audio context. I'm not even sure how I should set up the tests for my project. How should I set up a test environment correctly for my project? Answer 1: I would suggest to…
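Since the answer is truncated, here is one possible setup, sketched rather than prescriptive: give Node a minimal fake AudioContext before Tone is imported, so calls like createGain() don't explode. The stub below covers only what the error message mentions; a real test run will likely need more methods, or a dedicated mock library. File name and shape of the stub are assumptions.

```javascript
// test/setup.js -- loaded before the tests, e.g. `mocha --require ./test/setup.js`
function fakeNode() {
  return { connect() {}, disconnect() {}, gain: { value: 1 } };
}

global.window = global.window || {};
global.AudioContext = global.window.AudioContext = class {
  constructor() {
    this.destination = fakeNode();
    this.currentTime = 0;
    this.sampleRate = 44100;
  }
  createGain() { return fakeNode(); }
  createOscillator() {
    return Object.assign(fakeNode(), {
      start() {}, stop() {}, frequency: { value: 440 },
    });
  }
};
```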

Correct handling of React Hooks for microphone audio

人盡茶涼 submitted on 2019-12-11 17:05:26
Question: I'm trying to write a React Hook to handle streaming audio to an AudioContext which is analysed with Meyda (https://meyda.js.org/). I have managed to get the stream working and am able to pull out the data I want. However, I'm having trouble de-initialising the audio. If someone can offer me some guidance on setting up this hook correctly I'd be most grateful. I'm currently receiving the following error when I navigate away from a page using these hooks: Warning: Can't perform a React state…
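A hedged sketch of the cleanup side of such a hook: the state warning usually means the stream and context outlive the component, so the useEffect returns a teardown that stops Meyda, stops the tracks and closes the AudioContext. The Meyda calls follow its documented createMeydaAnalyzer API; the hook name and feature choice are assumptions.

```javascript
import { useEffect, useState } from 'react';
import Meyda from 'meyda';

function useMicrophoneFeatures() {
  const [features, setFeatures] = useState(null);

  useEffect(() => {
    let audioContext;
    let analyzer;
    let stream;
    let cancelled = false;

    navigator.mediaDevices.getUserMedia({ audio: true }).then((mediaStream) => {
      if (cancelled) {
        mediaStream.getTracks().forEach((t) => t.stop());
        return;
      }
      stream = mediaStream;
      audioContext = new (window.AudioContext || window.webkitAudioContext)();
      const source = audioContext.createMediaStreamSource(stream);

      analyzer = Meyda.createMeydaAnalyzer({
        audioContext,
        source,
        bufferSize: 512,
        featureExtractors: ['rms'],
        callback: (data) => { if (!cancelled) setFeatures(data); },
      });
      analyzer.start();
    });

    // Teardown on unmount: this is what prevents the setState-after-unmount warning.
    return () => {
      cancelled = true;
      if (analyzer) analyzer.stop();
      if (stream) stream.getTracks().forEach((t) => t.stop());
      if (audioContext) audioContext.close();
    };
  }, []);

  return features;
}
```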

MediaRecorder.stop() doesn't clear the recording icon in the tab

萝らか妹 submitted on 2019-12-11 16:33:50
Question: I start and stop a MediaRecorder stream. The red "recording" icon appears in the Chrome tab on start, but doesn't go away on stop. My code looks like this: const mediaRecorder = new MediaRecorder(stream); ... // Recording icon in the tab becomes visible. mediaRecorder.start(); ... // Recording icon is still visible. mediaRecorder.stop(); I also have a mediaRecorder.onstop handler defined. It doesn't return anything or interfere with the event object. What's the…
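A sketch of the usual fix, assuming the icon is tied to the live MediaStream rather than to the recorder: stopping the recorder leaves the stream's tracks running, so stop those too.

```javascript
mediaRecorder.stop();
// Release the microphone itself; the tab's recording icon disappears
// once every track of the underlying stream has been stopped.
stream.getTracks().forEach((track) => track.stop());
```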

Concatenate two audio blobs JavaScript

二次信任 submitted on 2019-12-11 14:49:48
Question: I am using recorder.js to record two audio files on my web page, which then creates the recordings as blobs. Once I have these audio blobs I would like to concatenate them into one track. How can I do it? Answer 1: If you save the raw PCM of the original clips from the callbacks on the mic's buffer, I think you can just pass an array of the buffered clips to a new Blob constructor: let recordedBlob = new Blob([clip1, clip2], { type: "audio/wav" }); recording.src = URL.createObjectURL(recordedBlob);
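An alternative sketch, assuming both blobs decode cleanly with the Web Audio API: decode each recording, copy the samples into one longer AudioBuffer, and play or re-encode that. Function and variable names are illustrative.

```javascript
async function concatBlobs(blobA, blobB) {
  const ctx = new (window.AudioContext || window.webkitAudioContext)();

  // Decode both recordings into AudioBuffers.
  const [a, b] = await Promise.all(
    [blobA, blobB].map(async (blob) =>
      ctx.decodeAudioData(await blob.arrayBuffer()))
  );

  // Allocate one buffer long enough for both, then copy the samples in order.
  const channels = Math.min(a.numberOfChannels, b.numberOfChannels);
  const joined = ctx.createBuffer(channels, a.length + b.length, a.sampleRate);

  for (let ch = 0; ch < channels; ch++) {
    joined.getChannelData(ch).set(a.getChannelData(ch), 0);
    joined.getChannelData(ch).set(b.getChannelData(ch), a.length);
  }
  return joined; // play via an AudioBufferSourceNode or re-encode to wav
}
```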

Can't pass a DOM element to a constructor function in Javascript when trying to abstract section of WebAudio API xhr request

有些话、适合烂在心里 submitted on 2019-12-11 14:33:45
Question: My problem is this: when I add an argument to the audioBoing function below and then place the same argument in the getElementById string, the function doesn't work. I get an error that says: Uncaught TypeError: cannot call method 'addEventListener' of null. The function below works fine. I rewrote the function below it to reflect what I'm trying to do. Ultimately I am trying to abstract a good portion of the function so I can just plug in arguments and run it without having to rewrite it each…
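A minimal sketch of the parameterised version the question seems to be after: pass the element's id in, and only wire the listener once the element actually exists in the DOM (a null from getElementById means a wrong id or a script that ran before the page loaded). The ids and the `loadSound` helper are assumptions for illustration.

```javascript
function audioBoing(buttonId, soundUrl) {
  const button = document.getElementById(buttonId);
  if (!button) {
    // getElementById returned null: wrong id, or the script ran too early.
    console.warn('No element with id', buttonId);
    return;
  }
  button.addEventListener('click', () => loadSound(soundUrl));
}

// Call it after the DOM is ready so getElementById can't return null.
document.addEventListener('DOMContentLoaded', () => {
  audioBoing('boingButton', 'sounds/boing.wav');
});
```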