web-audio-api

OfflineAudioContext and FFT in Safari

Submitted by 社会主义新天地 on 2019-12-10 18:28:01
Question: I am using OfflineAudioContext to do waveform analysis in the background. This works fine in Chrome, Firefox and Opera, but in Safari I get very dodgy behaviour. The waveform should be composed of many samples (329), but in Safari there are only ~38. window.AudioContext = window.AudioContext || window.webkitAudioContext; window.OfflineAudioContext = window.OfflineAudioContext || window.webkitOfflineAudioContext; const sharedAudioContext = new AudioContext(); const audioURL = 'https://s3
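One Safari-specific pitfall worth ruling out here: older Safari only implements the callback form of decodeAudioData on its prefixed contexts, and decoded buffers may come back resampled to the context's sample rate, which changes the frame count you iterate over. A minimal sketch of a compatibility wrapper plus a pure helper for sanity-checking the expected peak count (`expectedPeaks` and its parameters are illustrative names, not from the post):

```javascript
// Promise wrapper around the callback-style decodeAudioData that Safari
// requires; `ctx` is any (Offline)AudioContext.
function decodeAudioDataCompat(ctx, arrayBuffer) {
  return new Promise((resolve, reject) => {
    ctx.decodeAudioData(arrayBuffer, resolve, reject);
  });
}

// Pure helper: how many waveform peaks a buffer of `frames` frames yields
// when grouped into buckets of `framesPerPeak`. If Safari reports ~38 peaks
// instead of 329, compare buffer.length across browsers first -- a resampled
// buffer produces fewer buckets.
function expectedPeaks(frames, framesPerPeak) {
  return Math.ceil(frames / framesPerPeak);
}
```

Comparing `buffer.sampleRate` and `buffer.length` between Chrome and Safari should show quickly whether the decode step, rather than the analysis loop, is the culprit.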

Web Audio Api input from specific microphone

Submitted by 北慕城南 on 2019-12-10 17:46:48
Question: I'm using the Web Audio API ( navigator.getUserMedia({audio: true}, function, function) ) for audio recording. If the user has several microphone devices, can I select the desired recording device? I've come across a problematic situation where a brand-new Dell laptop (running Windows 8.1) has 2 microphone devices (on-board and external) and the external device was set to default (by the operating system). As expected, when recording, the input comes from the external
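With the modern API this is done by enumerating devices and passing a deviceId constraint to getUserMedia, rather than relying on the OS default. A sketch, assuming the device can be identified by a label substring (`preferredLabel` is an illustrative parameter; labels are only populated after the user has granted mic permission):

```javascript
// Pure helper: pick an audio input deviceId by label match,
// falling back to the first available input.
function pickAudioInput(devices, preferredLabel) {
  const inputs = devices.filter(d => d.kind === 'audioinput');
  const match = inputs.find(d => d.label.includes(preferredLabel));
  return (match || inputs[0] || {}).deviceId;
}

async function getMicStream(preferredLabel) {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const deviceId = pickAudioInput(devices, preferredLabel);
  // `exact` makes getUserMedia fail rather than silently
  // falling back to the default device.
  return navigator.mediaDevices.getUserMedia({
    audio: { deviceId: { exact: deviceId } }
  });
}
```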

Web audio API and multiple inputs mic device

Submitted by 梦想的初衷 on 2019-12-10 17:22:17
Question: I have an audio device with 4 microphone inputs. Does anyone know if I can use all of these inputs with the Web Audio API? Thanks! Answer 1: It should work by calling getUserMedia four times, choosing a different device each time, and using createMediaStreamSource four times, although I haven't tested it. Answer 2: Yes, you can list all input devices and select one to use. <html><body> <select id="devices_list"></select> <script> function devices_list(){ var handleMediaSourcesList = function(list){ for(i=0;i
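Answer 1's approach can be sketched as follows: one getUserMedia call per enumerated input, each wired into the same AudioContext. This is an untested illustration (as the answer itself cautions), and whether a 4-channel interface shows up as four separate devices or one multi-channel device depends on the OS driver:

```javascript
// Open every available audio input and return one
// MediaStreamAudioSourceNode per device.
async function openAllMicInputs(audioCtx) {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const inputs = devices.filter(d => d.kind === 'audioinput');
  const sources = [];
  for (const d of inputs) {
    const stream = await navigator.mediaDevices.getUserMedia({
      audio: { deviceId: { exact: d.deviceId } }
    });
    sources.push(audioCtx.createMediaStreamSource(stream));
  }
  return sources;
}
```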

alternative to audioContext.copyToChannel() in Safari and Edge

Submitted by 走远了吗. on 2019-12-10 13:52:26
Question: Neither Safari nor Edge supports the AudioBuffer.copyToChannel() method for populating an audioBuffer with custom content. Is there any other way to do it? In my case, I want to create an impulse response, populate a buffer with it, and convolve some sound with that buffer. For Chrome and Firefox this works: buffer = audioCtx.createBuffer(numOfChannels, 1, sampleRate); buffer.copyToChannel(impulseResponse, 0); buffer.copyToChannel(impulseResponse, 1); convolverNode.buffer =
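The standard fallback is getChannelData(), which returns the channel's underlying Float32Array; calling set() on it writes the samples directly. A small compatibility sketch:

```javascript
// Copy `source` (a Float32Array) into channel `channel` of `buffer`,
// using copyToChannel where available and getChannelData().set() otherwise.
function copyToChannelCompat(buffer, source, channel) {
  if (typeof buffer.copyToChannel === 'function') {
    buffer.copyToChannel(source, channel);
  } else {
    // getChannelData returns a live view of the buffer's samples.
    buffer.getChannelData(channel).set(source);
  }
}
```

For the impulse-response case above, this replaces each copyToChannel call one-for-one.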

How can I stop a Web Audio Script Processor and clear the buffer?

Submitted by 浪子不回头ぞ on 2019-12-10 13:39:54
Question: I'm trying to figure out a way to stop a Web Audio script processor node from running without disconnecting it. My initial thought was to just set onaudioprocess to null, but when I do this I hear a really short loop of audio playing. My guess is that the audio buffer is not being cleared, and the same buffer keeps playing repeatedly. I tried some additional techniques, like first setting the buffer channel array values all to 0 and then setting the "onaudioprocess"
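One pattern that avoids the stale-buffer loop is to keep the handler attached and have it write silence while "stopped", since the node reuses its output buffer between callbacks. A sketch, assuming a mono pass-through processor (the `state` flag is an illustrative mechanism, not from the post):

```javascript
// Returns an onaudioprocess handler that passes input through while
// state.running is true, and writes silence (instead of leaving stale
// samples in the reused output buffer) while it is false.
function makeProcessHandler(state) {
  return function (e) {
    const out = e.outputBuffer.getChannelData(0);
    if (!state.running) {
      out.fill(0);            // silence, not the previous buffer contents
      return;
    }
    const input = e.inputBuffer.getChannelData(0);
    out.set(input);           // normal pass-through processing
  };
}
```

Usage: `node.onaudioprocess = makeProcessHandler(state);` then toggle `state.running = false` to stop cleanly without disconnecting.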

Uncaught reference error BufferLoader is not defined

Submitted by ℡╲_俬逩灬. on 2019-12-10 12:40:23
Question: I'm trying to learn the Audio API, but I get an uncaught reference error for the BufferLoader class. I'm on Chrome and it's up to date. Shouldn't this class work with no problems? <html> <head> <script type=text/javascript> window.onload = init; var context; var bufferLoader; function init(){ context = new webkitAudioContext(); bufferLoader = new BufferLoader( context, [ ' https://dl.dropboxusercontent.com/u/1957768/kdFFO3.wav', ' https://dl.dropboxusercontent.com/u/1957768/geniuse%20meodies
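The likely cause: BufferLoader is not part of the Web Audio API at all; it is a helper class from an old tutorial that has to be defined or included in the page before use, hence the reference error. A modern sketch that makes the helper unnecessary (function name is illustrative):

```javascript
// Fetch a sound file and decode it into an AudioBuffer.
// Replaces the tutorial-era BufferLoader for a single URL.
async function loadBuffer(ctx, url) {
  const response = await fetch(url);
  const encoded = await response.arrayBuffer();
  return ctx.decodeAudioData(encoded);
}

// Loading several files at once, like BufferLoader did:
function loadBuffers(ctx, urls) {
  return Promise.all(urls.map(url => loadBuffer(ctx, url)));
}
```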

Web Audio API Stream: why isn't dataArray changing?

Submitted by ♀尐吖头ヾ on 2019-12-10 12:08:20
Question: EDIT 2: solved. See answer below. EDIT 1: I changed my code a little, added a gain node, and moved a function. I also found that IF I use the microphone, it works. It still doesn't work with USB audio input. Any ideas? This is my current code: window.AudioContext = window.AudioContext || window.webkitAudioContext; window.onload = function(){ var audioContext = new AudioContext(); var analyser = audioContext.createAnalyser(); var gainNode = audioContext.createGain(); navigator.mediaDevices
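A common cause of a "frozen" dataArray is reading the analyser only once: getByteTimeDomainData must be called again on every frame, since it takes a snapshot rather than returning a live view. A sketch of the usual animation loop (the `draw` callback is an illustrative placeholder):

```javascript
// Re-read the analyser's time-domain samples on every animation frame;
// reading once outside the loop yields a single frozen snapshot.
function startVisualizer(analyser, draw) {
  const dataArray = new Uint8Array(analyser.fftSize);
  (function frame() {
    analyser.getByteTimeDomainData(dataArray);  // refresh samples in place
    draw(dataArray);
    requestAnimationFrame(frame);
  })();
}
```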

Sending Audio Blob to server

Submitted by 旧城冷巷雨未停 on 2019-12-10 11:29:53
Question: I am trying to send an audio blob produced by the Web Audio API and Recorder.js to my Laravel controller using jQuery's $.post method. Here is what I am trying. $('#save_oralessay_question_btn').click(function(e){ var question_content = $('#question_text_area').val(); var test_id_fr = parseInt($('#test_id_fr').val()); var question_type = parseInt($('#question_type').val()); var rubric_id_fr = $('#rubric_id_fr').val(); var reader = new FileReader(); reader.onload = function(event){ var form_data =
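The usual way to upload a blob with jQuery is FormData via $.ajax, since $.post tries to URL-encode its data. A sketch; the endpoint, field names, and filename below are illustrative, not taken from the post (remember to include Laravel's CSRF token as one of the fields):

```javascript
// Append the recorded blob and any extra fields to a FormData and
// post it without letting jQuery mangle the payload.
function uploadRecording(blob, extraFields) {
  const form = new FormData();
  form.append('audio', blob, 'answer.wav');   // illustrative field/filename
  Object.entries(extraFields).forEach(([key, value]) => form.append(key, value));
  return $.ajax({
    url: '/questions/store',  // illustrative endpoint
    type: 'POST',
    data: form,
    processData: false,  // leave the FormData intact
    contentType: false   // let the browser set the multipart boundary
  });
}
```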

Web Audio API Firefox setValueCurveAtTime()

Submitted by 我的梦境 on 2019-12-10 11:15:28
Question: I am crossfading some audio, and I have an equal-power curve stored in a table. I'm calling this function to start the fadeout; the fade parameter is a GainNode made with createGain(): fade.gain.setValueCurveAtTime(epCurveOut, context.currentTime, fadeTime); In Chrome and Safari all goes well, but Firefox (v30) complains: SyntaxError: An invalid or illegal string was specified Instead of context.currentTime I've tried 0 and 0.01. Is this method not implemented, maybe? If so, how would I
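The spec requires the curve argument to be a Float32Array, and Firefox enforced this strictly while Chrome accepted plain arrays, which is a common cause of exactly this error. A sketch, assuming epCurveOut was built as a regular Array:

```javascript
// Coerce a curve to Float32Array, which setValueCurveAtTime requires;
// plain JS arrays are rejected by stricter implementations.
function toCurve(values) {
  return values instanceof Float32Array ? values : Float32Array.from(values);
}

// fade.gain.setValueCurveAtTime(toCurve(epCurveOut), context.currentTime, fadeTime);
```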

visualize mediastream which is coming from a remote peer connection

Submitted by 南笙酒味 on 2019-12-10 11:08:51
Question: For some days I've been trying to visualize an audio stream that comes over WebRTC. We have already written some visuals that work fine for the normal local stream (Web Audio microphone usage). Then I found some really interesting things on https://github.com/muaz-khan/WebRTC-Experiment/tree/master/ for streaming the microphone input between different browsers. We need this to get the same audio data from one backend to all clients in the frontend. Everything works fine, and some tests
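The wiring for a remote stream is the same as for a local one: feed it to createMediaStreamSource and tap an AnalyserNode. A sketch (the `draw` callback is an illustrative placeholder); note that some older Chrome versions reportedly had a bug where createMediaStreamSource on a remote WebRTC stream produced silence, leaving the analyser data flat, so testing in Firefox helps isolate that case:

```javascript
// Route a remote MediaStream through an AnalyserNode and poll it
// every animation frame for visualization.
function visualizeRemoteStream(audioCtx, remoteStream, draw) {
  const source = audioCtx.createMediaStreamSource(remoteStream);
  const analyser = audioCtx.createAnalyser();
  source.connect(analyser);  // no connection to destination is needed
  const data = new Uint8Array(analyser.frequencyBinCount);
  (function frame() {
    analyser.getByteFrequencyData(data);
    draw(data);
    requestAnimationFrame(frame);
  })();
}
```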