web-audio-api

play back generated sounds

Submitted by 南笙酒味 on 2019-12-07 19:39:49

Question: I want to capture audio (in my case from getUserMedia) and play it back. I'm able to push a bunch of AudioBuffers to an array like so:

```javascript
var recorder = audio_context.createJavaScriptNode(256, 2, 2);
recorder.onaudioprocess = function(e) {
  recorded.push(e.inputBuffer.getChannelData(0));
  // or just:
  // recorded.push(e.inputBuffer);
};
recorder.connect(audio_context.destination);
```

But then how do I play the buffers in the recorded array? Is there a way to merge these into one buffer and play it with…
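A common way to approach this (sketched here as an assumption, not taken from the excerpt) is to copy each chunk out of `onaudioprocess` and concatenate the copies into one `Float32Array`, which can then be written into an `AudioBuffer` with `copyToChannel` and played through an `AudioBufferSourceNode`:

```javascript
// Sketch: merge an array of Float32Array chunks into one contiguous buffer.
// Assumes each chunk was *copied* when it was recorded -- getChannelData()
// returns a view that the browser reuses between processing events, so push
// new Float32Array(e.inputBuffer.getChannelData(0)) rather than the view.
function mergeChunks(chunks) {
  var total = chunks.reduce(function (sum, c) { return sum + c.length; }, 0);
  var merged = new Float32Array(total);
  var offset = 0;
  chunks.forEach(function (c) {
    merged.set(c, offset);  // copy chunk into place
    offset += c.length;
  });
  return merged;
}
```

In the browser, the merged array would then go into `audio_context.createBuffer(1, merged.length, audio_context.sampleRate)` before playback.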

Speex split audio data - WebAudio - VOIP

Submitted by 大憨熊 on 2019-12-07 13:20:20

Question: I'm running a little app that encodes and decodes an audio array with the Speex codec in JavaScript: https://github.com/dbieber/audiorecorder With a small array filled with a sine waveform:

```javascript
for (var i = 0; i < 16384; i++) data.push(Math.sin(i / 10));
```

this works. But I want to build a VoIP application and have more than one array. So if I split my array up into 2 parts and encode > decode > merge, it doesn't sound the same as before. Take a look at this fiddle: http://jsfiddle.net/exh63zqL/ Both buttons should…
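One way to narrow the problem down (a sketch under assumptions, not the questioner's code): splitting the raw sample array and re-joining it is sample-exact, so any difference heard after split → encode → decode → merge comes from the codec, which keeps internal state between frames and effectively restarts at every chunk boundary:

```javascript
// Generate the same test signal as the question: sin(i / 10).
function sine(n) {
  var data = new Float32Array(n);
  for (var i = 0; i < n; i++) data[i] = Math.sin(i / 10);
  return data;
}

// Split at an arbitrary index and re-join without any codec in between.
// The result is bit-identical to the input, which isolates the artifact
// to the per-chunk encode/decode step.
function splitAndRejoin(data, at) {
  var first = data.subarray(0, at);
  var second = data.subarray(at);
  var out = new Float32Array(data.length);
  out.set(first, 0);
  out.set(second, first.length);
  return out;
}
```

If this round-trip is identical but the codec round-trip is not, the fix is to feed the encoder a continuous stream of frames (or use an encoder instance that persists across chunks) rather than encoding each chunk independently.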

Recording video simultaneously with audio in chrome blocks on main thread, causing invalid audio

Submitted by 六月ゝ 毕业季﹏ on 2019-12-07 12:54:00

Question: So, I have what I think is a fairly interesting and, hopefully, not intractable problem. I have an audio/video getUserMedia stream that I am recording in Chrome. Individually, each track records perfectly well. However, when attempting to record both, one blocks the main thread, hosing the other. I know that there is a way to resolve this. Muaz Khan has a few demos that seem to work without blocking. Audio is recorded via the Web Audio API. I am piping the audio track into a processor node…

Record audio on multiple tracks simultaneously from multiple devices

Submitted by ╄→尐↘猪︶ㄣ on 2019-12-07 12:16:18

Question: I'm currently developing an audio web application using the Web Audio API in JavaScript, but I found a problem: I need to record simultaneously from different devices to different tracks (imagine, for example, a sound card with 8 inputs recording into 8 buffers independently, in order to record a drummer), but I have not found any way to tell the AudioContext which device it must record from :( Can anybody help me? Thanks a lot :)

Answer 1: Well, you can have multiple microphones - but only if they're…
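As a sketch of how device selection is typically done (this is an assumption, not part of the answer excerpt): enumerate inputs with `navigator.mediaDevices.enumerateDevices()` and pass each input's `deviceId` as an `exact` constraint to `getUserMedia`, opening one stream per device:

```javascript
// Build a getUserMedia constraint object that pins the capture to one
// specific input device. deviceId values come from enumerateDevices().
function audioConstraintsFor(deviceId) {
  return {
    audio: { deviceId: { exact: deviceId } },  // fail rather than fall back
    video: false
  };
}

// In the browser, once per input device:
//   navigator.mediaDevices.getUserMedia(audioConstraintsFor(id))
//     .then(function (stream) { /* feed this stream into its own recorder */ });
```

Each resulting stream can then be connected to its own recording graph, giving one independent buffer per device.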

Pause Web Audio API sound playback

Submitted by 烂漫一生 on 2019-12-07 09:22:58

Question: How can I create a pause function for my audio? I already have a play function in my script below. http://pastebin.com/uRUQsgbh

```javascript
function loadSound(url) {
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';
  // When loaded, decode the data
  request.onload = function() {
    context.decodeAudioData(request.response, function(buffer) {
      // when the audio is decoded, play the sound
      playSound(buffer);
    }, onError);
  };
  request.send();
}
```
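A common pattern (an assumption, not taken from the pastebin): an `AudioBufferSourceNode` is one-shot, so "pause" means stopping the node while remembering how far playback got, and "resume" means starting a fresh node at that offset via `source.start(0, offset)`. The offset bookkeeping is plain arithmetic, shown here without the browser objects:

```javascript
// Track playback position across pause/resume. "now" would be
// context.currentTime in a real Web Audio app.
function makeTransport() {
  var startedAt = 0;   // timeline moment playback (re)started
  var pausedAt = 0;    // seconds into the buffer when paused
  var playing = false;
  return {
    play: function (now) { startedAt = now - pausedAt; playing = true; },
    pause: function (now) { pausedAt = now - startedAt; playing = false; },
    offset: function () { return pausedAt; },  // pass to source.start(0, offset)
    isPlaying: function () { return playing; }
  };
}
```

On pause you would also call `source.stop()` and discard the node; on resume, create a new source from the same decoded buffer and start it at `transport.offset()`.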

Javascript: UInt8Array to Float32Array

Submitted by 本秂侑毒 on 2019-12-07 08:37:09

Question: I have some audio buffers in unsigned 8-bit PCM format that I need to play via Web Audio, which only accepts 32-bit float PCM. I now have an ArrayBuffer for pieces of pcm_u8 data (coming from a Uint8Array). How can I convert it to a Float32Array?

Answer 1: This function converts an ArrayBuffer to a Float32Array:

```javascript
function convertBlock(buffer) { // incoming data is an ArrayBuffer
  var incomingData = new Uint8Array(buffer); // create a uint8 view on the ArrayBuffer
  var i, l = incomingData.length; // length, we need this for…
```
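A complete version of the conversion the answer begins (my completion, an assumption about where the truncated code was heading): re-center the unsigned bytes around the midpoint 128 and scale into [-1, 1):

```javascript
// Convert unsigned 8-bit PCM (0..255, silence at 128) to float PCM.
function u8ToFloat32(buffer) {            // buffer is an ArrayBuffer
  var u8 = new Uint8Array(buffer);        // uint8 view on the raw bytes
  var out = new Float32Array(u8.length);
  for (var i = 0; i < u8.length; i++) {
    out[i] = (u8[i] - 128) / 128;         // 0 -> -1.0, 128 -> 0.0, 255 -> ~0.992
  }
  return out;
}
```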

Open stream_url of a Soundcloud Track via Client-Side XHR?

Submitted by 我的梦境 on 2019-12-07 07:49:40

Question: Since you can call the Soundcloud API via XHR (because of the CORS headers it sends - http://backstage.soundcloud.com/2010/08/of-cors-we-do/, right?), I was wondering if this was possible with the audio data itself, like a track's stream_url for example. When trying to open the stream_url with an XHR (from the client side) using the Web Audio API, I get an "Origin is not allowed by Access-Control-Allow-Origin." error. Is there a way to load audio resources via XMLHttpRequest from the client side…

How do I compress multiple Web Audio sources/tracks into one?

Submitted by 回眸只為那壹抹淺笑 on 2019-12-07 06:13:57

Question: We are making a web-based music editor and mixer based on the Web Audio API. Users can mix together multiple tracks, crop tracks, etc. The actual mixing together of the tracks just involves playing back all the sources at once. We want to add the option to save the mix and make it available for download to the user's computer. Is there some way to do this on the front end (like connecting all the sources to one destination/export node), or even on the backend (we are using RoR)?

Answer 1: …
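One front-end route (a sketch, not confirmed by the answer excerpt): render the whole mix with an `OfflineAudioContext`, then wrap the resulting samples in a WAV container for download. A minimal mono 16-bit encoder might look like:

```javascript
// Wrap float samples in a minimal mono 16-bit PCM WAV container.
// "samples" would come from offlineContext.startRendering() in the browser.
function encodeWav(samples, sampleRate) {
  var buffer = new ArrayBuffer(44 + samples.length * 2);
  var view = new DataView(buffer);
  function writeString(offset, s) {
    for (var i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  }
  writeString(0, 'RIFF');
  view.setUint32(4, 36 + samples.length * 2, true);  // file size - 8
  writeString(8, 'WAVE');
  writeString(12, 'fmt ');
  view.setUint32(16, 16, true);                 // fmt chunk size
  view.setUint16(20, 1, true);                  // PCM format
  view.setUint16(22, 1, true);                  // mono
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * 2, true);     // byte rate
  view.setUint16(32, 2, true);                  // block align
  view.setUint16(34, 16, true);                 // bits per sample
  writeString(36, 'data');
  view.setUint32(40, samples.length * 2, true); // data chunk size
  for (var i = 0; i < samples.length; i++) {
    var s = Math.max(-1, Math.min(1, samples[i]));  // clamp to [-1, 1]
    view.setInt16(44 + i * 2, s * 0x7FFF, true);
  }
  return buffer;
}
```

The returned `ArrayBuffer` can be turned into a `Blob` of type `audio/wav` and offered as a download link.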

Live streaming using FFMPEG to web audio api

Submitted by 南楼画角 on 2019-12-07 06:13:54

Question: I am trying to stream audio using node.js + FFmpeg to browsers connected on the LAN, using only the Web Audio API. I am not using the <audio> element because it adds its own buffer of 8 to 10 seconds, and I want the latency as low as possible (around 1 to 2 seconds max). Audio plays successfully, but it is very choppy and noisy. Here is my node.js (server-side) file:

```javascript
var ws = require('websocket.io'),
    server = ws.listen(3000);
var child_process = require("child_process");
var i = 0;
server.on('connection',
```
…
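Choppy playback like this often comes from starting every decoded chunk at the current time instead of queuing it. A common fix (an assumption, not from the question) is to keep a running schedule time so each chunk is `source.start()`-ed exactly when the previous one ends; the scheduling arithmetic, separated from the AudioContext:

```javascript
// Gapless chunk scheduler. "now" would be context.currentTime and
// "duration" each decoded AudioBuffer's duration in seconds.
function makeScheduler(minLead) {
  var nextTime = 0;  // timeline position where the next chunk should start
  return function schedule(now, duration) {
    // If we fell behind (network stall), restart slightly ahead of "now".
    if (nextTime < now + minLead) nextTime = now + minLead;
    var startAt = nextTime;   // pass to source.start(startAt)
    nextTime += duration;     // next chunk begins exactly when this one ends
    return startAt;
  };
}
```

With back-to-back scheduling, small network jitter is absorbed by the lead time instead of producing an audible gap at every chunk boundary.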

Float32 to Int16 - Javascript (Web Audio API)

Submitted by 穿精又带淫゛_ on 2019-12-07 03:50:25

Question: I am trying to convert Float32 to Int16, but so far it has not been effective, because the output audio generates lots of clipping (so, very poor audio output). I am using this function:

```javascript
function convertoFloat32ToInt16(buffer) {
  var l = buffer.length; // Buffer
  var buf = new Int16Array(l / 3);
  while (l--) {
    if (l == -1) break;
    if (buffer[l] * 0xFFFF > 32767)
      buf[l] = 32767;
    else if (buffer[l] * 0xFFFF < -32768)
      buf[l] = -32768;
    else
      buf[l] = buffer[l] * 0xFFFF;
  }
  return buf.buffer;
}
```

If I implement the…
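A corrected conversion (my sketch, not the questioner's code): Web Audio floats lie in [-1, 1], so the scale factor is 0x7FFF (not 0xFFFF, which doubles the amplitude and clips everything), out-of-range samples should be clamped, and the output must have the same length as the input:

```javascript
// Convert float PCM in [-1, 1] to signed 16-bit PCM.
function float32ToInt16(buffer) {
  var out = new Int16Array(buffer.length);     // same length as input
  for (var i = 0; i < buffer.length; i++) {
    var s = Math.max(-1, Math.min(1, buffer[i]));  // clamp, don't wrap
    // Asymmetric range: negatives scale to -32768, positives to 32767.
    out[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
  }
  return out;
}
```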