web-audio-api

How can I extract the preceding audio (from microphone) as a buffer when silence is detected (JS)?

早过忘川 Posted on 2019-11-27 01:51:49
Question: I'm using the Google Cloud Speech-to-Text API with a Node.js back-end. The app needs to listen for voice commands and transmit them to the back-end as a buffer. For this, I need to send the buffer of the preceding audio when silence is detected. Any help would be appreciated. The JS code is included below: if (!navigator.getUserMedia) navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia; if …
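Below is a minimal sketch of one way to approach this (not the asker's code): buffer microphone audio with a ScriptProcessorNode, compute an RMS level per block, and flush the accumulated blocks once the level has stayed below a threshold for a while. SILENCE_THRESHOLD, SILENCE_CHUNKS and sendToBackend() are hypothetical names.

    // Sketch: accumulate mic audio and flush it to the back-end on silence.
    const SILENCE_THRESHOLD = 0.01;   // assumed RMS level treated as silence
    const SILENCE_CHUNKS = 20;        // ~20 blocks of 4096 samples of quiet before flushing

    navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
      const ctx = new AudioContext();
      const source = ctx.createMediaStreamSource(stream);
      const processor = ctx.createScriptProcessor(4096, 1, 1);
      let chunks = [];
      let quietCount = 0;

      processor.onaudioprocess = function (e) {
        const input = e.inputBuffer.getChannelData(0);
        chunks.push(new Float32Array(input));        // keep a copy of this block

        let sum = 0;
        for (let i = 0; i < input.length; i++) sum += input[i] * input[i];
        const rms = Math.sqrt(sum / input.length);

        if (rms < SILENCE_THRESHOLD) {
          if (++quietCount >= SILENCE_CHUNKS && chunks.length > SILENCE_CHUNKS) {
            sendToBackend(chunks, ctx.sampleRate);   // hypothetical upload helper
            chunks = [];
            quietCount = 0;
          }
        } else {
          quietCount = 0;                            // speech resumed, keep buffering
        }
      };

      source.connect(processor);
      processor.connect(ctx.destination);            // some browsers need this to fire onaudioprocess
    });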

HTML5 web audio - convert audio buffer into wav file

主宰稳场 Posted on 2019-11-27 01:38:44
Question: I have an audio buffer rendered using webkitOfflineAudioContext. Now I want to export it to a WAV file. How do I do that? I tried using recorder.js but couldn't figure out how to use it. Here's my code: http://jsfiddle.net/GBQV8/. Answer 1: Here's a gist that should help: https://gist.github.com/kevincennis/9754325. I haven't actually tested this, so there might be a stupid typo or something, but the basic approach will work (I've done it before). Essentially, you're going to use the web worker …
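For reference, here is a rough sketch of the same approach without the web worker: interleave the AudioBuffer's channels, convert the samples to 16-bit PCM, and prepend a RIFF/WAVE header to get a Blob you can download. It is untested here and assumes a mono or stereo buffer.

    // Sketch: turn an AudioBuffer into a WAV Blob (16-bit PCM).
    function audioBufferToWav(buffer) {
      const numChannels = buffer.numberOfChannels;
      const sampleRate = buffer.sampleRate;
      const frames = buffer.length * numChannels;
      const out = new DataView(new ArrayBuffer(44 + frames * 2));

      function writeString(offset, str) {
        for (let i = 0; i < str.length; i++) out.setUint8(offset + i, str.charCodeAt(i));
      }

      writeString(0, 'RIFF');
      out.setUint32(4, 36 + frames * 2, true);
      writeString(8, 'WAVE');
      writeString(12, 'fmt ');
      out.setUint32(16, 16, true);                            // fmt chunk size
      out.setUint16(20, 1, true);                             // PCM format
      out.setUint16(22, numChannels, true);
      out.setUint32(24, sampleRate, true);
      out.setUint32(28, sampleRate * numChannels * 2, true);  // byte rate
      out.setUint16(32, numChannels * 2, true);               // block align
      out.setUint16(34, 16, true);                            // bits per sample
      writeString(36, 'data');
      out.setUint32(40, frames * 2, true);

      let offset = 44;
      for (let i = 0; i < buffer.length; i++) {
        for (let ch = 0; ch < numChannels; ch++) {
          const s = Math.max(-1, Math.min(1, buffer.getChannelData(ch)[i]));
          out.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
          offset += 2;
        }
      }
      return new Blob([out], { type: 'audio/wav' });
    }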

onaudioprocess not called on iOS 11

旧街凉风 Posted on 2019-11-27 00:56:18
Question: I am trying to get audio capture from the microphone working in Safari on iOS 11, after support was recently added. However, the onaudioprocess callback is never called. Here's an example page: <html> <body> <button onclick="doIt()">DoIt</button> <ul id="logMessages"> </ul> <script> function debug(msg) { if (typeof msg !== 'undefined') { var logList = document.getElementById('logMessages'); var newLogItem = document.createElement('li'); if (typeof msg === 'function') { msg = Function.prototype …
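A hedged sketch of the workarounds usually suggested for this symptom (not a confirmed fix): create or resume the AudioContext inside the button's tap handler, keep a global reference to the ScriptProcessorNode so it is not garbage-collected, and connect it through to the destination so Safari actually pulls audio through it.

    // Sketch: wire up the processor inside the user gesture and keep references alive.
    let audioCtx, processor;   // kept global on purpose so they are not collected

    function doIt() {
      audioCtx = new (window.AudioContext || window.webkitAudioContext)();
      if (audioCtx.state === 'suspended') audioCtx.resume();

      navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
        const source = audioCtx.createMediaStreamSource(stream);
        processor = audioCtx.createScriptProcessor(4096, 1, 1);
        processor.onaudioprocess = function (e) {
          debug('got ' + e.inputBuffer.length + ' samples');   // debug() from the page above
        };
        source.connect(processor);
        processor.connect(audioCtx.destination);
      });
    }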

Create a waveform of the full track with Web Audio API

北战南征 Posted on 2019-11-27 00:37:46
Question: Realtime moving waveform: I'm currently playing with the Web Audio API and made a spectrum using canvas. function animate(){ var a=new Uint8Array(analyser.frequencyBinCount), y=new Uint8Array(analyser.frequencyBinCount),b,c,d; analyser.getByteTimeDomainData(y); analyser.getByteFrequencyData(a); b=c=a.length; d=w/c; ctx.clearRect(0,0,w,h); while(b--){ var bh=a[b]+1; ctx.fillStyle='hsla('+(b/c*240)+','+(y[b]/255*100|0)+'%,50%,1)'; ctx.fillRect(1*b,h-bh,1,bh); ctx.fillRect(1*b,y[b],1,1); } animation …
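For the full-track (non-realtime) waveform, one common approach is sketched below: decode the whole file once with decodeAudioData and draw one min/max pair per pixel column from the raw channel data. It assumes the same ctx, w and h canvas variables as the snippet above.

    // Sketch: draw a static waveform of the entire decoded track.
    function drawFullWaveform(audioCtx, arrayBuffer) {
      audioCtx.decodeAudioData(arrayBuffer, function (decoded) {
        const data = decoded.getChannelData(0);
        const step = Math.floor(data.length / w);
        ctx.clearRect(0, 0, w, h);
        for (let x = 0; x < w; x++) {
          let min = 1, max = -1;
          for (let i = 0; i < step; i++) {
            const v = data[x * step + i];
            if (v < min) min = v;
            if (v > max) max = v;
          }
          // map [-1, 1] to canvas pixels and draw one column
          ctx.fillRect(x, (1 + min) * h / 2, 1, Math.max(1, (max - min) * h / 2));
        }
      });
    }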

How to use the Web Audio API to get raw PCM audio?

心已入冬 Posted on 2019-11-26 23:29:55
Question: How do I use getUserMedia to access the microphone in Chrome and then stream it to get raw audio? I need to get the audio in linear16. Answer 1: Unfortunately, MediaRecorder doesn't support raw PCM capture. (A sad oversight, in my opinion.) Therefore, you'll need to get the raw samples and buffer/save them yourself. You can do this with the ScriptProcessorNode. Normally, this node is used to modify the audio data programmatically, for custom effects and whatnot. But there's no reason you can't just …
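A minimal sketch of the idea in the answer: grab Float32 samples from a ScriptProcessorNode and convert them to 16-bit linear PCM yourself. handlePcmChunk() is a hypothetical consumer for the resulting Int16Array chunks.

    // Sketch: capture mic audio and convert each block to linear16.
    function floatTo16BitPCM(float32) {
      const int16 = new Int16Array(float32.length);
      for (let i = 0; i < float32.length; i++) {
        const s = Math.max(-1, Math.min(1, float32[i]));
        int16[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
      }
      return int16;
    }

    navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
      const ctx = new AudioContext();
      const source = ctx.createMediaStreamSource(stream);
      const node = ctx.createScriptProcessor(4096, 1, 1);
      node.onaudioprocess = function (e) {
        handlePcmChunk(floatTo16BitPCM(e.inputBuffer.getChannelData(0)));  // hypothetical consumer
      };
      source.connect(node);
      node.connect(ctx.destination);
    });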

Access microphone from a browser - Javascript

…衆ロ難τιáo~ Posted on 2019-11-26 19:35:08
Question: Is it possible to access the microphone (built-in or auxiliary) from a browser using client-side JavaScript? Ideally, it would store the recorded audio in the browser. Thanks! Answer 1: Here we capture microphone audio as a Web Audio API event loop buffer using getUserMedia(); time-domain and frequency-domain snippets of each audio event loop buffer are printed (viewable in the browser console: press F12 or Ctrl+Shift+I). <html><head><meta http-equiv="Content-Type" content="text/html; charset …
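A condensed sketch along the same lines as the answer: capture the microphone with getUserMedia() and log time-domain and frequency-domain data from an AnalyserNode on each animation frame (open the console to see the output).

    // Sketch: microphone capture plus AnalyserNode logging.
    navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
      const ctx = new AudioContext();
      const analyser = ctx.createAnalyser();
      analyser.fftSize = 2048;
      ctx.createMediaStreamSource(stream).connect(analyser);

      const timeData = new Uint8Array(analyser.fftSize);
      const freqData = new Uint8Array(analyser.frequencyBinCount);

      (function poll() {
        analyser.getByteTimeDomainData(timeData);
        analyser.getByteFrequencyData(freqData);
        console.log('time domain sample 0:', timeData[0], 'frequency bin 0:', freqData[0]);
        requestAnimationFrame(poll);
      })();
    }).catch(function (err) {
      console.error('Microphone access denied or unavailable:', err);
    });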

No sound on iOS 6 Web Audio API

ⅰ亾dé卋堺 Posted on 2019-11-26 15:24:34
Question: I was really excited to see that iOS 6 supports the Web Audio API, since we make HTML5 games. However, I cannot get iOS 6 to play any sound at all using the Web Audio API, even with examples that work fine in desktop Chrome. Here is an HTML5 game with touch controls that plays audio via the Web Audio API (falling back to HTML5 audio if it is not present): http://www.scirra.com/labs/sbios6b/ Edit: @Srikumar suggested some workarounds. I applied them in the version below. It still does not work! http: …
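One workaround commonly cited for this symptom, shown as a hedged sketch rather than a confirmed fix for the game above: iOS will not produce Web Audio output until a buffer is played from inside a user touch event, so "unlock" the context on the first touchend.

    // Sketch: play a one-sample silent buffer from a touch handler to unlock audio on iOS.
    const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

    function unlockAudio() {
      const buffer = audioCtx.createBuffer(1, 1, 22050);   // one silent sample
      const source = audioCtx.createBufferSource();
      source.buffer = buffer;
      source.connect(audioCtx.destination);
      if (source.start) source.start(0); else source.noteOn(0);   // older WebKit uses noteOn
      document.removeEventListener('touchend', unlockAudio, false);
    }
    document.addEventListener('touchend', unlockAudio, false);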

Is there a way to use the Web Audio API to sample audio faster than real-time?

别来无恙 Posted on 2019-11-26 10:57:34
Question: I'm playing around with the Web Audio API and trying to find a way to import an MP3 (so this is Chrome-only for now) and generate a waveform of it on a canvas. I can do this in real time, but my goal is to do it faster than real time. All the examples I've been able to find involve reading the frequency data from an analyser object in a function attached to the onaudioprocess event: processor = context.createJavascriptNode(2048,1,1); processor.onaudioprocess = processAudio; ...
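A sketch of the faster-than-real-time route: decode the MP3, render it through an OfflineAudioContext, and read the rendered samples directly instead of waiting for onaudioprocess. drawWaveform() is a hypothetical canvas-drawing helper, and the modern promise-based startRendering() is assumed.

    // Sketch: render a decoded track offline, then use the samples directly.
    function renderOffline(arrayBuffer) {
      const tempCtx = new (window.AudioContext || window.webkitAudioContext)();
      tempCtx.decodeAudioData(arrayBuffer, function (decoded) {
        const offline = new OfflineAudioContext(
          decoded.numberOfChannels, decoded.length, decoded.sampleRate);
        const source = offline.createBufferSource();
        source.buffer = decoded;
        source.connect(offline.destination);
        source.start(0);
        offline.startRendering().then(function (rendered) {
          drawWaveform(rendered.getChannelData(0));   // hypothetical canvas helper
        });
      });
    }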

Using local file for Web Audio API in Javascript

落花浮王杯 Posted on 2019-11-26 09:27:24
Question: I'm trying to get sound working in my iPhone game using the Web Audio API. The problem is that this app is entirely client-side. I want to store my MP3s in a local folder (and without being user-input driven), so I can't use XMLHttpRequest to read the data. I was looking into using the FileSystem API, but Safari doesn't support it. Is there any alternative? Edit: Thanks for the responses below. Unfortunately the Audio API is horribly slow for games. I had this working and the latency just makes the …
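One workaround often suggested for a fully client-side app, shown here as a sketch: embed the sound as a base64 string in the script itself, decode it to an ArrayBuffer, and hand that to decodeAudioData. SOUND_BASE64 is a hypothetical constant holding the base64-encoded MP3 data.

    // Sketch: load a sound from an embedded base64 string instead of XMLHttpRequest.
    const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

    function base64ToArrayBuffer(base64) {
      const binary = atob(base64);
      const bytes = new Uint8Array(binary.length);
      for (let i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
      return bytes.buffer;
    }

    audioCtx.decodeAudioData(base64ToArrayBuffer(SOUND_BASE64), function (buffer) {
      const source = audioCtx.createBufferSource();
      source.buffer = buffer;
      source.connect(audioCtx.destination);
      source.start(0);
    });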