web-audio-api

MediaRecorder API simple example / “hello world”

Submitted by 时间秒杀一切 on 2019-12-25 14:58:09
Question: Here's a simple example for the MediaRecorder API: (async function() { let chunks = []; let stream = await navigator.mediaDevices.getUserMedia({ audio:true, video:false }); let mediaRecorder = new MediaRecorder(stream); // record for 3 seconds: mediaRecorder.start(); setTimeout(() => { mediaRecorder.stop(); }, 3000) mediaRecorder.ondataavailable = function(e) { chunks.push(e.data); }; mediaRecorder.onstop = async function() { let blob = new Blob(chunks, { type: mediaRecorder.mimeType }); let …
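The excerpt above is cut off; a minimal, self-contained sketch of the same flow (handlers wired up before start(), playback of the result via an Audio element created here for illustration) might look like this:

```javascript
// Minimal MediaRecorder "hello world" sketch, assuming microphone access is granted.
(async function () {
  const chunks = [];
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false });
  const mediaRecorder = new MediaRecorder(stream);

  // Attach handlers before starting so no dataavailable event is missed.
  mediaRecorder.ondataavailable = (e) => chunks.push(e.data);
  mediaRecorder.onstop = () => {
    const blob = new Blob(chunks, { type: mediaRecorder.mimeType });
    const audio = new Audio(URL.createObjectURL(blob)); // play back the recording
    audio.play();
  };

  // Record for 3 seconds, then stop.
  mediaRecorder.start();
  setTimeout(() => mediaRecorder.stop(), 3000);
})();
```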

BiquadFilterNode.Q for the NOTCH filter

Submitted by 佐手、 on 2019-12-25 09:35:05
Question: I cannot find any documentation on how the BiquadFilterNode.Q value works if you set BiquadFilterNode.type to 'notch'. The notch filter should, in practice, attenuate signals within a range of frequencies, but BiquadFilterNode.frequency seems to control "the center of the range of frequencies" and the Q value is said to control "the width of the frequency band". However, there is no information on what units are used. Say, if I want to attenuate signals with frequencies between 300 Hz - …
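For the notch type, Q is dimensionless: the attenuated band is roughly frequency / Q hertz wide, centred on frequency. A sketch for a hypothetical 300–600 Hz band (the upper bound in the excerpt is cut off, so 600 Hz is only an assumption):

```javascript
// Sketch: notch out a hypothetical 300–600 Hz band.
// For 'notch', bandwidth ≈ frequency / Q, so Q = centre / (high - low).
const ctx = new (window.AudioContext || window.webkitAudioContext)();
const low = 300;                        // Hz (from the question)
const high = 600;                       // Hz (assumed; the excerpt is truncated)
const centre = Math.sqrt(low * high);   // geometric centre of the band, ≈ 424 Hz

const notch = ctx.createBiquadFilter();
notch.type = 'notch';
notch.frequency.value = centre;
notch.Q.value = centre / (high - low);  // ≈ 1.4 for a 300–600 Hz band

// someSource.connect(notch).connect(ctx.destination);
```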

Record the sound of a Web Audio API audio context

Submitted by 痞子三分冷 on 2019-12-25 07:08:33
Question: I am using the Web Audio API in my project. Is there a way to record the audio data that's being sent to webkitAudioContext.destination? .wav files are playing in my browser, so there should be some way to store that data into a .wav file. I know this is possible, but I have not yet found any solution :( recorder.js might help me, but so far I have only found it recording live microphone input. Is it possible to record my audio (.wav files) with the help of recorder.js? Please help; I am using this …
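One way to capture what a graph sends to the context's destination, without touching the microphone, is to route the same nodes into a MediaStreamAudioDestinationNode and feed its stream to a MediaRecorder. This is only a sketch of that idea (the sourceNode name is a placeholder for whatever you already connect to the destination), and the output is whatever container the browser encodes, not a .wav:

```javascript
// Sketch: record whatever your Web Audio graph plays, with no microphone input.
const ctx = new (window.AudioContext || window.webkitAudioContext)();
const recDest = ctx.createMediaStreamDestination(); // a recordable sink

// 'sourceNode' stands in for whatever you already connect to ctx.destination,
// e.g. an AudioBufferSourceNode playing your .wav data.
function wire(sourceNode) {
  sourceNode.connect(ctx.destination); // still audible
  sourceNode.connect(recDest);         // also goes into the recording
}

const chunks = [];
const recorder = new MediaRecorder(recDest.stream);
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = () => {
  // Browsers typically produce webm/ogg here, not .wav; writing a .wav would
  // need an extra step (decoding the blob and re-encoding the PCM yourself).
  const blob = new Blob(chunks, { type: recorder.mimeType });
  console.log('recorded', blob.size, 'bytes of', blob.type);
};
recorder.start();
```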

Web Audio API- onended event scope

Submitted by 与世无争的帅哥 on 2019-12-25 05:29:27
Question: I'm having a tricky issue with the Web Audio API AudioBufferSourceNode and its onended event. Basically, what I'd like to do is have two AudioBufferSourceNodes that are each triggered when the other one finishes, and keep them playing back and forth. I understand that AudioBufferSourceNodes are pretty much done once you call start(), and they're designed to be garbage-collected after this. So I tried to work around that like so: var source1; var source2; source1 = getSound(buffer1); source2 = …
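Since an AudioBufferSourceNode is one-shot, the usual workaround is to keep only the decoded buffers and build a fresh source node inside each onended handler. A sketch along those lines, where buffer1 and buffer2 are assumed to be AudioBuffers decoded elsewhere:

```javascript
// Sketch: ping-pong playback between two decoded AudioBuffers by creating a
// new one-shot source node each time the previous one ends.
const ctx = new (window.AudioContext || window.webkitAudioContext)();

function playAlternating(current, next) {
  const source = ctx.createBufferSource(); // fresh node every time
  source.buffer = current;
  source.connect(ctx.destination);
  source.onended = () => playAlternating(next, current); // swap roles
  source.start();
}

// buffer1 and buffer2 are assumed to come from decodeAudioData elsewhere.
playAlternating(buffer1, buffer2);
```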

Replace HTML5 audio with Web Audio API

Submitted by 孤街浪徒 on 2019-12-25 01:59:43
Question: HTML5 audio on mobile devices has many limitations, and actually I would call them bugs. My app implements something like an audio player. It all works fine on desktop, but in the mobile version I run into many bugs and have to apply different workarounds for different browsers and operating systems to get it to work, and it still falls short. I haven't dived into the Web Audio API, but it seems to be designed for slightly different tasks. So my question is: does it make sense (and is it possible) to replace HTML5 …
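A bare-bones replacement for a simple audio-element player, using fetch plus decodeAudioData, looks roughly like the sketch below; the URL and the '#play' button selector are placeholders, and whether this actually behaves better on mobile still depends on each browser's autoplay and gesture rules:

```javascript
// Sketch: play a file through the Web Audio API instead of an <audio> element.
// 'track.mp3' and '#play' are placeholders; start playback from a user gesture on mobile.
const ctx = new (window.AudioContext || window.webkitAudioContext)();

async function play(url) {
  const response = await fetch(url);
  const data = await response.arrayBuffer();
  const buffer = await ctx.decodeAudioData(data);

  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start();
  return source; // keep a reference if you need to stop() it later
}

document.querySelector('#play').addEventListener('click', () => play('track.mp3'));
```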

Cross application audio analysis with the Web Audio API

Submitted by 旧时模样 on 2019-12-24 08:46:29
Question: I am writing an audio visualizer application using the Web Audio API and the three.js library. I've had a lot of success using the HTML5 <audio> element to get audio from a local MP3 or a streaming MP3 file with the createMediaElementSource() method from the AudioContext object, but I was hoping my users would also have the ability to visualize music from their iTunes or Spotify apps. I've briefly looked at the 'audio jack api', but most of the documentation is over my head, as I have no experience …
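Browsers cannot tap directly into other applications' audio, but one route that sometimes works is screen/tab capture with audio via getDisplayMedia, then an AnalyserNode on the captured stream. Audio capture through getDisplayMedia is very browser- and platform-dependent (often tab audio only), so treat this strictly as a sketch:

```javascript
// Sketch: analyse audio captured from another tab/window via screen sharing.
// The shared stream may arrive with no audio track at all, depending on the
// browser, platform, and what the user chooses to share.
async function analyseSharedAudio() {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
  if (stream.getAudioTracks().length === 0) {
    throw new Error('No audio track was shared');
  }

  const ctx = new (window.AudioContext || window.webkitAudioContext)();
  const source = ctx.createMediaStreamSource(stream);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser); // analysis only; no need to reach ctx.destination

  const bins = new Uint8Array(analyser.frequencyBinCount);
  (function tick() {
    analyser.getByteFrequencyData(bins); // feed these values to the three.js visuals
    requestAnimationFrame(tick);
  })();
}
```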

Web Audio API on Android Chrome

Submitted by 梦想的初衷 on 2019-12-24 08:30:20
Question: I'm trying to test this audio recording example on Android devices with Chrome. According to this, the Web Audio API should be available in Android Chrome 37. The RecordRTC developer wrote here: "RecordRTC uses the WebAudio API for stereo audio recording. AFAIK, WebAudio is not supported on Android Chrome releases yet." But now it is listed as supported, so I assumed it should work. I ran the following tests (all with the demo page): Chrome 37 on Windows - works; Chrome 37 on Galaxy S4 (Android) - fails …
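A quick runtime check of what the device actually exposes (a possibly prefixed AudioContext, the legacy and modern getUserMedia entry points) helps narrow down where the demo fails. This is only a diagnostic sketch; on mobile Chrome an AudioContext also commonly starts suspended until a user gesture:

```javascript
// Sketch: report which of the pieces the recording demo relies on exist here.
const AudioCtx = window.AudioContext || window.webkitAudioContext;
const legacyGUM = navigator.getUserMedia || navigator.webkitGetUserMedia;
const modernGUM = navigator.mediaDevices && navigator.mediaDevices.getUserMedia;

console.log('AudioContext:         ', !!AudioCtx);
console.log('getUserMedia (legacy): ', !!legacyGUM);
console.log('getUserMedia (modern): ', !!modernGUM);

if (AudioCtx) {
  const ctx = new AudioCtx();
  // Often 'suspended' on mobile until resumed from a tap or click.
  console.log('AudioContext state:   ', ctx.state);
}
```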

AudioContext.decodeAudioData(…) not working on iPhone but working everywhere else

Submitted by 我与影子孤独终老i on 2019-12-24 08:08:10
Question: I have the following very basic code, which is part of a more complex problem. My problem here is that the function context.decodeAudioData(arrayBuffer) is not working on iPhone (tried in Safari and Chrome) or on Mac (Safari), but it works everywhere else (Android and Windows 10, all browsers). It even works on Mac (Chrome). On Mac (Safari) I get the following error: Unhandled Promise Rejection: TypeError: Not enough arguments. Here is the code: window.AudioContext = window.AudioContext || window …
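The "Not enough arguments" error is consistent with older Safari/iOS WebKit implementing only the callback form of decodeAudioData, without the single-argument promise overload. A common workaround is to wrap the callback form in a promise yourself; a sketch:

```javascript
// Sketch: promise wrapper around decodeAudioData that also works on older
// Safari/iOS WebKit, which only accepts the callback signature.
window.AudioContext = window.AudioContext || window.webkitAudioContext;
const ctx = new AudioContext();

function decode(arrayBuffer) {
  return new Promise((resolve, reject) => {
    // Always pass success/error callbacks; browsers with the promise form
    // accept this signature too, so one code path covers both.
    ctx.decodeAudioData(arrayBuffer, resolve, reject);
  });
}

// usage: decode(buf).then(audioBuffer => { /* play or analyse it */ });
```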

Sound analysis without getUserMedia

Submitted by 混江龙づ霸主 on 2019-12-24 06:24:18
Question: I am trying to analyse the audio output from the browser, but I don't want the getUserMedia prompt (which asks for microphone permission) to appear. The sound sources are SpeechSynthesis and an MP3 file. Here's my code: return navigator.mediaDevices.getUserMedia({ audio: true }) .then(stream => new Promise(resolve => { const track = stream.getAudioTracks()[0]; this.mediaStream_.addTrack(track); this._source = this.audioContext.createMediaStreamSource(this.mediaStream_); this._source.connect …
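For the MP3 part, no microphone permission is needed at all: an AnalyserNode can be fed from createMediaElementSource instead of getUserMedia. (SpeechSynthesis output, on the other hand, cannot be routed into the audio graph in current browsers.) A sketch, with the file URL as a placeholder:

```javascript
// Sketch: analyse an MP3 without ever calling getUserMedia.
// 'speech.mp3' is a placeholder; CORS headers matter for cross-origin files,
// and play() may need to be triggered from a user gesture.
const ctx = new (window.AudioContext || window.webkitAudioContext)();
const audio = new Audio('speech.mp3');
audio.crossOrigin = 'anonymous';

const source = ctx.createMediaElementSource(audio);
const analyser = ctx.createAnalyser();
source.connect(analyser);
analyser.connect(ctx.destination); // keep the file audible

const data = new Uint8Array(analyser.frequencyBinCount);
audio.addEventListener('play', function tick() {
  analyser.getByteFrequencyData(data); // use these values for the analysis
  if (!audio.paused) requestAnimationFrame(tick);
});
audio.play();
```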

HTML5: How to replace the Web Audio API for Internet Explorer in JavaScript games?

Submitted by Deadly on 2019-12-24 03:36:22
Question: I'm new to audio in HTML. I found some nice examples for little JavaScript games. When I try to load the games in Internet Explorer, I get: "Web Audio API is not supported in this browser". I found this site: http://caniuse.com/#feat=audio-api and it looks like Internet Explorer doesn't support it. I also found SoundManager 2, which seems to work in all browsers. My question: is there a way to detect whether the browser supports the Web Audio API and offer a fallback if it is not supported? I want to be …
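Detection comes down to checking for the (possibly prefixed) AudioContext constructor and falling back to a plain HTML5 Audio element (or a library such as SoundManager 2) when it is missing. A sketch of such a shim, written ES5-style so IE can at least parse it; 'jump.mp3' is a placeholder asset name:

```javascript
// Sketch: use Web Audio where available, fall back to plain HTML5 <audio> otherwise.
var AudioCtx = window.AudioContext || window.webkitAudioContext;

function makePlayer(url) {
  if (AudioCtx) {
    var ctx = new AudioCtx();
    var buffer = null;
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.responseType = 'arraybuffer';
    xhr.onload = function () {
      ctx.decodeAudioData(xhr.response, function (decoded) { buffer = decoded; });
    };
    xhr.send();
    return function () {
      if (!buffer) { return; } // still loading
      var src = ctx.createBufferSource();
      src.buffer = buffer;
      src.connect(ctx.destination);
      src.start(0);
    };
  }
  // Fallback: plain HTML5 audio element, which IE9+ supports for MP3.
  var el = new Audio(url);
  return function () { el.currentTime = 0; el.play(); };
}

var playJump = makePlayer('jump.mp3');
// call playJump() from game events
```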